The purpose of a program evaluation is to assess a program's performance. Program evaluation can help organizations, such as nonprofits, Extension, and governmental entities, diagnose and resolve implementation failure or program theory failure. Implementation failure stems from flawed program execution; program theory failure stems from flawed program design. Troubleshooting implementation failure is the more common task. This article explains how to (1) identify which type of failure a nonprofit organization may be facing, (2) choose the right evaluation tool to pinpoint the specific problem, (3) interpret process evaluation results, and (4) use the evaluation findings for continuous program improvement.
Identification of the Problem: Process versus Effectiveness
One must first determine which type of failure the organization faces: program theory failure or implementation failure. Program theory failure occurs when a planned program, process, or set of strategies is insufficient to reach desired outcomes (Anderson, 2005; Shapiro, 1982). Implementation failure occurs when a planned operation or set of strategies is not correctly put into practice (Durlak & DuPre, 2008; Fixsen, 2005; Meyers, Durlak, & Wandersman, 2012). In other words, the program does not adequately perform the activities and functions specified in the program design that are assumed to be necessary for bringing about the intended benefits.
The two examples below contrast the two types of program failure. A logic model is a helpful tool for locating where in a program the problem sits; Table 1 presents the two program failure zones in logic model format.
Table 1. A farmers' association's sustainable farming certification curriculum logic model.
Example of Program Theory Failure
A farmers' association's sustainable agriculture certification training curriculum is built upon the theory that “If farmers learn environmentally sustainable agricultural practices, they will reduce environmental pollutants.” After providing a well-attended educational program for small-scale tomato farmers, the association noticed that the program did not reduce the environmental pollutants produced by farms, and it wanted to identify the cause. An outcome evaluation helped the association identify three critical problems: (1) the farmers' motivation to implement sustainable practices did not change; (2) the sustainable practices required more time and effort and were therefore associated with lower profit; and (3) the farmers who participated in the program ultimately did not change their farming practices.
This first example shows the hallmarks of program theory failure. The curriculum was correctly implemented and well attended, yet the farmers did not change their attitudes about sustainability: they were unwilling or unable to commit the time and resources required by the new farming techniques. The course proved ineffective at increasing sustainable farming practices, and the program failed to reach the organization's long-term goals. Changing the program's implementation would not resolve this problem. To learn how to fix a program theory failure using an outcome evaluation, see Evaluation: A Systematic Approach (Rossi, Lipsey, & Henry, 2019).
Example of Implementation Failure
A farmers' association observed that participants begin its sustainable agriculture certification education program, but few complete the certification process. The education program manager wanted to identify the problem. Tracking attendance showed that participants were missing sessions for several reasons. A process evaluation helped the organization identify three fundamental problems: (1) the program is provided synchronously online, which is an issue given the limited access to high-speed Internet in rural areas; (2) the instructors spoke only English, which excluded non-English-speaking farmers; and (3) two certification classes occurred during peak harvest time, limiting farmers' attendance.
In this second example, the problems were the course delivery modality, the exclusion of potential clients, and scheduling that limited attendance. An implementation failure occurred because low attendance was tied to how the farmers accessed the course, not to whether the curriculum affected the farmers' knowledge or decision-making. The evaluation demonstrates that the error was in the farmers' association's execution of the program, not in the curriculum. Changing how the program is delivered would likely resolve this implementation failure, and a process evaluation is the correct tool for assessing this type of problem.
Process Evaluation Design
As in any empirical research, the evaluation question should drive the choice of data source, data collection instrument, and analysis. Process evaluation questions differ from outcome evaluation questions because the program's effectiveness is not at issue; a process evaluation assumes the program theory is correct and examines the relationship between programmatic inputs, activities, and outputs. Accordingly, the data collection instrument for a process evaluation does not capture changes in knowledge, skill, behavior, or conditions; it assesses the implementation steps. Determining the evaluation question and selecting the data collection instruments are the most important steps in process evaluation design.
Process Evaluation Questions
Example Question 1: Does resource allocation align with the scope of service?
Programs require meeting facilities, an evidence-based curriculum, and skilled, knowledgeable facilitators. Without sufficient resources, a program will likely fail. The desired participation rates and participant characteristics should determine resource allocation: more participants generally means more employee hours, more printed materials, and a larger facility dedicated to the program. One can then conduct a cost-per-participant analysis to guide the assignment of employees and facilities, as sketched below.
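As a rough illustration, the sketch below computes a cost per participant from a handful of budget lines. All figures and category names are hypothetical placeholders, not drawn from any actual program budget.

```python
# Hypothetical cost-per-participant sketch. All figures and category
# names are illustrative; substitute the program's actual budget lines.

budget = {
    "facilitator_hours": 120 * 35.00,  # 120 staff hours at $35/hour
    "printed_materials": 18.50 * 60,   # $18.50 per packet, 60 packets
    "facility_rental": 8 * 250.00,     # 8 sessions at $250 per session
}

participants = 47  # actual attendance, not planned enrollment

total_cost = sum(budget.values())
cost_per_participant = total_cost / participants

print(f"Total program cost: ${total_cost:,.2f}")
print(f"Cost per participant: ${cost_per_participant:,.2f}")
```

Comparing this figure across delivery sites or program cycles can flag where staffing or facility costs are out of proportion to participation.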
Example Question 2: Does stakeholder demand align with programmatic outputs?
Needs assessments estimate problems within a community or client population. A needs assessment also identifies obstacles and barriers that could prevent the target audience from accessing the program (Royse et al., 2016). The organization can request feedback from community residents or potential clients about its ability to meet their needs, and an evaluator can assess whether the program is accessible to the target population and whether participants use the program as intended. For example, a waiting list for access to a 4-H program would indicate high demand; the program manager could adjust the program design to accommodate it.
Example Question 3: Do program goals align with service participation?
Nonprofits or Extension agencies can compare service use to the size or scope of their target population by using community indicators of their intended clients' or customers' needs (Rossi et al., 2019). Service participation problems typically break down into questions about coverage and bias (Rossi et al., 2019): coverage asks how much of the target population the program reaches, and bias asks whether some subgroups are over- or under-represented among participants. Failure to integrate the nonprofit's mission into day-to-day programming is a common mistake. An organization's mission should be communicated frequently and clearly to maintain support and commitment from organization members (McDonald, 2007); a lack of awareness and understanding of the mission often produces a vague understanding of the organization's target population and a misallocation of resources. Setting goals that align with the target population's primary concerns and needs, and assigning resources more strategically, can correct this (Rossi et al., 2019). For example, providing synchronous online sustainability certification education to farmers without access to broadband limits service use: farmers would need to travel to a location with broadband, such as a library or Extension office, which may not be feasible. A program redesign should eliminate the need for a strong, sustained Internet connection.
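A minimal sketch of the coverage-and-bias idea, assuming simple head counts are available, might look like the following. Every count and subgroup share below is invented for illustration.

```python
# Hypothetical coverage-and-bias check, loosely following the coverage
# concept in Rossi et al. (2019). All counts and shares are invented.

target_population = 1200   # farmers in the service area needing certification
participants_served = 210  # farmers who actually enrolled

coverage = participants_served / target_population
print(f"Coverage: {coverage:.1%} of the target population reached")

# Simple bias check: compare subgroup shares among participants with
# their shares in the target population.
target_shares = {"small_scale": 0.70, "non_english_speaking": 0.25}
participant_shares = {"small_scale": 0.82, "non_english_speaking": 0.05}

for group, expected in target_shares.items():
    observed = participant_shares[group]
    direction = "under" if observed < expected else "over"
    print(f"{group}: {observed:.0%} of participants vs. "
          f"{expected:.0%} of the target population ({direction}-represented)")
```

In this invented example, the under-representation of non-English-speaking farmers would echo the English-only instruction problem described earlier, an implementation issue rather than a theory issue.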
Evaluation Instruments to Help Assess the Problem
Process evaluation instruments should do more than collect subjective opinions; they should objectively identify the problem (Royse et al., 2016). The data collected by an instrument should be easy to interpret, reliable, and valid.
Examples of Data Collection Instruments and Data Sources
The right data collection instrument depends on what the evaluator wants to measure. Examples of data collection instruments include the following.
- Pre-tests and post-tests: Written or oral tests to measure a change in program participants' knowledge or attitudes.
- Participant sign-in sheets: Logs or documentation of participant attendance to measure the program's reach.
- Customer service surveys: Questionnaires to measure customer or client satisfaction with a service or product.
- Task completion checklists: Step-by-step guides for each activity to measure the accuracy and quality of program implementation (see the sketch after this list).
- Annual personnel and volunteer performance reviews: Assessments conducted by a manager to measure the strengths and weaknesses in an employee's or volunteer's work.
- Community indicators: Data collected by government or private entities that capture population-level counts of important health, safety, and economic concerns, used to measure the need or demand for services. Examples include poverty rates, school grades, and crime statistics.
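To illustrate how a task completion checklist can yield an implementation fidelity measure, the sketch below scores one observed session. The checklist steps are hypothetical; a real checklist would mirror the curriculum's own protocol.

```python
# Hypothetical fidelity checklist scoring for one observed session.
# Steps and observations are illustrative placeholders.

checklist = [
    ("Stated session objectives", True),
    ("Used approved curriculum materials", True),
    ("Completed hands-on demonstration", False),
    ("Administered end-of-session check", True),
]

completed = sum(1 for _, done in checklist if done)
fidelity_score = completed / len(checklist)

print(f"Fidelity: {completed}/{len(checklist)} steps ({fidelity_score:.0%})")
for step, done in checklist:
    print(f"  [{'x' if done else ' '}] {step}")
```

Tracked across facilitators and sessions, scores like this can reveal whether each facilitator is implementing the program as designed.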
Comparing community demand for services with actual service participation can help determine program efficiency. Client satisfaction surveys, focus groups, interviews, and observations can assess demand for services. Community indicators such as income levels, eviction rates, incidence of disease, or the number of families receiving Women, Infants, and Children or Supplemental Nutrition Assistance Program benefits can help quantify the need for programming and determine the scope of the program's target population. Organizations can conduct cost-benefit analyses or client and family satisfaction surveys to determine whether needs are met and to identify areas for program improvement.
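As a hedged sketch of this comparison, the example below sets estimated need (from a community indicator) against households actually served, by county. The counties and counts are invented for illustration.

```python
# Hypothetical demand-versus-participation comparison by county.
# Indicator values and county names are invented for illustration.

counties = {
    "County A": {"households_in_need": 3400, "households_served": 510},
    "County B": {"households_in_need": 900, "households_served": 410},
}

for name, counts in counties.items():
    service_rate = counts["households_served"] / counts["households_in_need"]
    print(f"{name}: serving {service_rate:.0%} of estimated need")
```

A large gap between estimated need and households served points toward implementation questions (access, awareness, scheduling) rather than toward the program theory itself.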
For example, Feeding America is a hunger-relief charity that works with a network of more than 200 food banks and farmers to feed the hungry (Feeding America, 2014). However, food banks often struggle to meet demand for their services. In the case of Second Harvest Heartland, a food bank in Minnesota and western Wisconsin, the COVID-19 pandemic resulted in a food shortage (Fiocco et al., 2020). Through process evaluation, organizations such as Feeding America can use community indicators and client satisfaction surveys to help ensure that their local partners (e.g., Second Harvest) are prepared to meet the specific needs of their target population.
Interpreting Process Evaluation Data
After gathering data from participants and aggregate sources, the evaluator can begin to interpret the causes of implementation failure and build a plan for program improvement. An evaluator may find that the program is inappropriate for the audience, has low feasibility or fidelity, or rests on an ill-defined program theory. Such problems are attributable to implementation and lead to dysfunctional programming.
When reviewing the data collected from the evaluation instrument, an evaluator should assess the program's appropriateness for both the population and the organization. The important questions here are:
- To what extent does the program address an identified need?
- How well does the program align with the organization's priorities or mission?
- Is this organization the right organization in the community to provide the program?
It is also important to ensure the program conforms to the original design of the curriculum or protocols. Observation logs may reveal a lack of fidelity in program provision and data collection: each facilitator implements the program differently, or participants do not complete the program as prescribed. An evaluator can document facilitators' behaviors and engagement with participants over time to identify patterns and problem areas. Organizational feasibility problems can stem from insufficient training, management oversight, materials, or facilities, or from too many employees or volunteers involved in implementation. Feasibility problems associated with participants include inconsistent participation, language barriers, and spotty Internet access.
Accessibility and enjoyment are important for satisfactory program completion. Customer satisfaction surveys provide rich data for process evaluations: attendance logs capture program completion, while satisfaction surveys capture which aspects of the program a participant easily accessed and most enjoyed. Most importantly, satisfaction surveys can surface problems that attendance data alone would miss.
By reviewing the program facilitators' daily service reports or logs, an evaluator may find that the cause of program failure is simply a poorly defined program theory: the program design and implementation instructions are too vague. An unclear program theory can lead to inadequate evaluation questions and data collection instruments, as well as a lack of fidelity in implementation. These are implementation failure problems. The implementation steps, and the purpose of each step, need to be clear to both the facilitator and the participants.
Engaging in Continuous Program Improvement
Looking Ahead
Developing a program improvement plan from the evaluation results is essential. The action plan should be specific, manageable, and measurable (Pell Institute, 2010): specific in outlining the improvement in outcomes the organization expects to see, and manageable in size and scope, with a timeline for completion and achievable, relevant goals. The evaluator should clearly communicate what is expected and how the organization can demonstrate completion of the program improvement steps.
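One way to keep an action plan specific and measurable is to record each improvement step in a consistent structured form. The sketch below is a hypothetical example of such an entry, not a prescribed format; the fields and values are illustrative.

```python
# Hypothetical program-improvement action item. Field names and values
# are illustrative, not a prescribed format.

action_item = {
    "problem": "Two certification classes scheduled during peak harvest",
    "improvement_step": "Move classes to the off-season and record sessions",
    "success_measure": "Session attendance of at least 80% of enrollees",
    "responsible_party": "Education program manager",
    "deadline": "Before the next program cycle",
}

for field, value in action_item.items():
    print(f"{field.replace('_', ' ').title()}: {value}")
```

An entry like this names the expected improvement, a measurable indicator of success, and who must act by when, which makes completion easy to demonstrate.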
Accountability
To improve programming, the organization must view evaluation as a method of accountability. The evaluation method should be rigorous and detailed, collecting the information needed for a comprehensive evaluation (Hoefer, 2000). The findings should be concise and easy to interpret, promoting accountability and transparency.
Conclusion
Nonprofits can assess the value of their programs through program evaluation, which can help resolve issues related to implementation failure or program theory failure. Implementation failure is more common and is best diagnosed through process evaluation, which assesses whether the program's activities and outputs are executed as intended. By addressing these issues, the organization can create an action plan to improve the program.
References
Anderson, A. (2005). An Introduction to Theory of Change. Evaluation Exchange, 11(2), 12–19. http://www.hfrp.org/evaluation/the-evaluation-exchange/issue-archive/evaluation-methodology/an-introduction-to-theory-of-change
Centers for Disease Control and Prevention. (2014). Practical Strategies for Culturally Competent Evaluation. Atlanta, GA: U.S. Department of Health and Human Services.
Durlak, J. A., & DuPre, E. P. (2008). Implementation Matters: A Review of Research on the Influence of Implementation on Program Outcomes and the Factors Affecting Implementation. American Journal of Community Psychology, 41(3–4), 327. https://doi.org/10.1007/s10464-008-9165-0
Feeding America. (2014). Hunger in America Study. https://www.feedingamerica.org/research/hunger-in-america
Fiocco, D., Knoch-Dohlin, M., Lucas, A., & Ruby, B. (2020). Making a Difference with Data: What It Can Mean for Food Banks. McKinsey & Company. https://www.mckinsey.com/featured-insights/food-security/making-a-difference-with-data
Fixsen, D. L. (2005). Implementation Research: A Synthesis of the Literature. National Implementation Research Network.
Hoefer, R. (2000). Accountability in Action?: Program Evaluation in Nonprofit Human Service Agencies. Nonprofit Management and Leadership, 11(2), 167–177. https://doi.org/10.1002/nml.11203
Innovation Network, Inc. (2010). Logic Model Workbook. https://www.innonet.org/news-insights/resources/logic-model-workbook/
James Bell Associates. (2007). Evaluation Brief: What's the Difference? Understanding Process and Outcome Evaluation. Arlington, VA. https://www.jbassoc.com/wp-content/uploads/2018/03/Understanding-Process-Outcome-Evaluation.pdf
McDonald, R. E. (2007). An Investigation of Innovation in Nonprofit Organizations: The Role of Organizational Mission. Nonprofit and Voluntary Sector Quarterly, 36(2), 256–281. https://doi.org/10.1177/0899764006295996
Meyers, D. C., Durlak, J., & Wandersman, A. (2012). What Are Important Components of Quality Implementation? A Synthesis of Implementation Frameworks. American Journal of Community Psychology, 50(3–4), 481–496.
Royse, D., Thyer, B. A., & Padgett, D. K. (2016). Program Evaluation: An Introduction to an Evidence-Based Approach. Cengage Learning.
Patin, G. A. (2013). Program Evaluation in the Nonprofit Sector: An Exploratory Study of Leaders' Perceptions. UNF Graduate Theses and Dissertations. 457. https://digitalcommons.unf.edu/etd/457
Pell Institute. (2010). Improve Program with Evaluation Findings. http://toolkit.pellinstitute.org/evaluation-guide/communicate-improve/improve-program-with-evaluation-findings/
Rossi, P., Lipsey, M., & Henry, G. (2019). Evaluation: A Systematic Approach. Sage Publications, Inc.
Rural Health Innovations. (2016). A Guide to Writing a Program Evaluation Plan. https://www.ruralcenter.org/sites/default/files/Evaluation_Plan_Guide_Allied.pdf
Shapiro, J. (1982). Evaluation as Theory Testing: An Example from Head Start. Educational Evaluation and Policy Analysis, 4, 341–353.
TSNE. (2018). Process Evaluation vs. Outcome Evaluation. https://www.tsne.org/blog/process-evaluation-vs-outcome-evaluation