
Program Evaluation Challenges for Early-Career Extension Professionals: What Can You Do to Reduce the Stress?

John M. Diaz and Laura A. Warner

Abstract

The purpose of this article is to outline meaningful strategies for overcoming the program evaluation challenges that early-career Extension professionals face. The strategies outlined in this article are grounded in the experiences of Extension professionals in three states (Florida, North Carolina and Pennsylvania). To provide a manageable framework for agents to use, these strategies center on solutions to the challenges that newer Extension professionals rated as most important to address.

Introduction

Program evaluation is considered a core competency domain for all Extension professionals, with special attention needed at the entry level (Harder, Place & Scheer, 2010). Extension professionals are expected to carry out meaningful evaluations to develop needs-based programs, make programmatic improvements and demonstrate program impacts. An understanding of early-career evaluation challenges can help Extension professionals proactively position themselves from the outset to conduct meaningful evaluations.

Identifying Common Challenges from Extension Professionals in Three States

To identify the most pervasive challenges that early-career Extension professionals face, we conducted a multi-state (Florida, North Carolina and Pennsylvania), three-round Delphi study. The Delphi technique is an effective methodology for moving a group toward consensus on a specific problem (Warner, 2015). We developed a panel of 30 Extension professionals (10 from each state) to participate in the Delphi study. We defined early-career Extension professionals as those with at least one year of experience but no more than three years.

We used three online surveys to collect data from our panel. The first round of the study included one open-ended item that asked the panel to "list all of the program evaluation challenges that you have faced as a newer Extension agent (program evaluation task(s) or situation(s) that really tests your abilities)." This round was intended to create a comprehensive list of challenges and returned 36 unique challenges for consideration. In the subsequent rounds, we asked the panel to rate the importance of addressing each challenge. Our panel achieved consensus (meaning most assigned high importance) on 27 challenges (Table 1).
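For readers who want to run a similar consensus check on their own panel ratings, a minimal sketch in Python follows. The ratings shown and the 70% consensus cutoff are illustrative assumptions for the example, not the exact data or decision rule from our study.

  # Minimal Delphi consensus check (illustrative sketch).
  # ASSUMPTION: consensus means at least 70% of panelists rated the
  # challenge as highly important; adjust to your study's definition.
  CONSENSUS_THRESHOLD = 0.70

  # Hypothetical round-two ratings: True if a panelist rated the
  # challenge highly important, False otherwise.
  ratings = {
      "Determining program impacts and how to measure those": [True] * 30,
      "Developing goals and objectives": [True] * 26 + [False] * 4,
  }

  for challenge, votes in ratings.items():
      share = sum(votes) / len(votes)
      status = "consensus" if share >= CONSENSUS_THRESHOLD else "no consensus"
      print(f"{challenge}: {share:.0%} ({status})")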

The vast majority of the challenges the group identified are related to evaluation preparation and planning, such as (a) determining program impacts and how to measure those, (b) evaluating newly developed programs, (c) developing goals and objectives, (d) understanding how to integrate evaluation into Extension programming, (e) managing the limited time available for evaluation with the demand for evaluation work and (f) lack of understanding of evaluation techniques and where it is best to use them. These challenges seem intuitive and demonstrate the need for early-career agents to focus at least some of their initial efforts, including training and support, on evaluation preparation and planning.

Strategies to Overcome Program Evaluation Challenges

While the vast majority of the challenges relate to evaluation preparation and planning, they do not represent all of the challenges identified. Because we want to offer a manageable framework, we focus here on strategies for the preparation and planning challenges that we believe have the most potential to alleviate the issues early-career Extension professionals face.

Recommendation 1: Think Evaluation from the Beginning

Early-career Extension professionals are often still learning about program planning and evaluation, including how much time evaluation requires and how to schedule evaluation activities. Planning well in advance of report-of-accomplishment deadlines is an excellent strategy for avoiding many of these challenges. Consider evaluation from the very beginning, and seek appropriate support and training to increase competencies and, in turn, integrate evaluation into programmatic decisions and thinking. For example, an Extension professional might schedule blocks of time for developing, implementing and completing an evaluation at the same time a program is being scheduled. Whether the necessary time is available should factor into the decision of whether or not to conduct a specific program. Planning ahead can help address a lack of program evaluation knowledge and skills early on.

Recommendation 2: Communicate with Supervisors, Peer Mentors and Advisory Groups to Gain a Clear Understanding of Evaluation Expectations and Resources

Gaining a clear understanding of evaluation expectations and resources is another foundational step toward program evaluation success for early-career Extension agents. Israel, Diehl and Galindo-Gonzalez (2009) provide multiple recommendations that can help, including discussing expectations for evaluating your Extension programs and partnering with focus teams. The authors explain that by integrating discussion of evaluation expectations into meetings with supervisors and advisory committees, Extension agents can understand and negotiate what they are expected to evaluate and with how much rigor. Additionally, by partnering with focus teams and other agents, early-career Extension agents can develop a peer-driven model that provides clarity on the expectations and resources needed for meaningful program evaluations. For more information on these recommendations, refer to EDIS publication WC090, Evaluation Situations, Stakeholders & Strategies (https://edis.ifas.ufl.edu/wc090).

Recommendation 3: Conduct a Needs Assessment before Developing a Program

To design appropriate goals and objectives for a program, it is important to understand the problem or issue the program intends to mitigate. A needs assessment is a systematic process for understanding the current situation, context and problem that an Extension professional's program can address. The process begins by identifying the target audience and key stakeholder groups and collecting information from them using various means (such as surveys, interviews, observation, or focus groups) to inform the creation of the program. A simple seven-step framework can help guide the assessment (a brief sketch of the data-analysis step follows the list):

  1. Write objectives. (What is it that you want to learn about the audience?)
  2. Select audience. (Whose needs are you measuring, and to whom will you give the required information?)
  3. Select audience sample. (How will you select a sample of respondents who represent the target audience?)
  4. Collect data. (Will you collect data directly or indirectly from the target audience?)
  5. Choose an instrument. (What techniques will you use?)
  6. Analyze data.
  7. Follow up. (What will you do with the information you gain?)
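To make step 6 more concrete, here is a minimal sketch of one way to tally needs assessment responses and rank topics by expressed need. The topic names and the 1-to-5 need scale are hypothetical; the framework above does not prescribe any particular analysis.

  from collections import defaultdict

  # Hypothetical needs assessment responses: each respondent rated how
  # strongly they need education on a topic, from 1 (no need) to 5 (high need).
  responses = [
      {"topic": "smart irrigation", "need": 5},
      {"topic": "smart irrigation", "need": 4},
      {"topic": "soil testing", "need": 3},
      {"topic": "soil testing", "need": 2},
      {"topic": "rain barrels", "need": 4},
  ]

  by_topic = defaultdict(list)
  for r in responses:
      by_topic[r["topic"]].append(r["need"])

  # Rank topics by mean expressed need to inform program priorities.
  ranked = sorted(by_topic.items(), key=lambda kv: sum(kv[1]) / len(kv[1]), reverse=True)
  for topic, needs in ranked:
      print(f"{topic}: mean need {sum(needs) / len(needs):.1f} (n={len(needs)})")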

The information collected through this process can help Extension professionals develop situation statements that can serve as a frame of reference for program goals and objectives. Having conducted a needs assessment is also important for articulating why certain objectives and strategies were selected, which strengthens Extension reporting. The University of Wisconsin Extension service has an easy-to-follow resource that can help organize the data collected from this process to develop situation statements that can effectively guide a program (https://fyi.extension.wisc.edu/programdevelopment/files/2016/03/lmcourseall.pdf; reference pages 25–27).

Recommendation 4: Use the Data Collected from Your Needs Assessment to Create SMART Outcome Objectives

Since an Extension program should be positioned to meet a need, program objectives should represent a pathway from the current condition or situation to a desired state. The information needed to define both states comes directly from a needs assessment and provides useful data for developing a program theory of change. A program theory of change guides Extension professionals in developing an educational program that moves participants from learning to doing. As participants act on what they learn, the current situation or condition moves toward the desired state.

To support this progression, objectives should cut across the various levels of outcomes, including immediate outcome objectives (knowledge, attitudes, skills and aspirations), intermediate outcomes (behavior change/practice adoption) and long-term impacts (social, environmental and economic conditions). Often, several objectives at different levels must be met to address the problem the program targets. To measure success, the objectives need to be specific, measurable, attainable, relevant and time-bound (SMART). Information on the SMART framework can be found in EDIS publication FE577, Developing SMART Goals for Your Organization (https://edis.ifas.ufl.edu/fe577).
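One way to see the measurable and time-bound parts of SMART in practice is to treat each objective as a target that evaluation data can be checked against. The sketch below is illustrative only; the field names and example numbers are assumptions, not part of the SMART framework itself.

  from dataclasses import dataclass

  @dataclass
  class SmartObjective:
      # A measurable, time-bound outcome target (illustrative structure).
      description: str
      target_share: float  # fraction of participants expected to meet the outcome
      timeframe: str       # when the outcome is measured

      def is_met(self, n_meeting: int, n_participants: int) -> bool:
          return n_meeting / n_participants >= self.target_share

  # Hypothetical intermediate outcome objective for a practice-adoption target.
  adoption = SmartObjective(
      description="Adopt at least one smart-irrigation technology",
      target_share=0.45,
      timeframe="within 6 months of program engagement",
  )

  # Suppose a follow-up survey finds 19 of 40 participants adopted one.
  print(adoption.is_met(n_meeting=19, n_participants=40))  # True (47.5% >= 45%)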

Recommendation 5: Use SMART Objectives to Guide Evaluation Design and Instrument Development

Since program objectives should guide program development and evaluation, they should be created before the program is delivered. Two questions illustrate why:

  • How can I create an intentional program if I don't know what I want to change?
  • How do I know if my program is successful?

The SMART objective framework provides a clear picture of the intended outcomes of the program, which helps guide program development and evaluation design. Crafted correctly, SMART objectives provide a clear picture of what program success looks like and thus should guide the associated measurement and adjustments. If the framework is used incorrectly or not at all, it is more difficult to understand the specific expectations of the program, when outcomes are expected to occur and how to measure success. Paying attention to these details in advance supports more intentional and clearly articulated programmatic reporting.

Let's review an example using a residential horticulture program focused on water conservation. The example program was developed to mitigate the overuse of community water resources for maintaining lush, green lawns, driven by a lack of knowledge of water conservation technologies. Table 2 outlines three associated objectives that use the SMART framework from Recommendation 4 to guide program design and evaluation. These objectives are not intended to represent the only acceptable approach to water conservation but rather to provide a simple example that illustrates our recommendation.

These objectives call for an evaluation design that collects data directly following the program activity as well as 6 months and 1 year after program engagement. The objectives also indicate the nature of the data to collect. The program will use surveys that ask participants to report knowledge of smart irrigation gained (directly following the activity; see the methods in EDIS publication WC135, Capturing Change: Comparing Pretest-Posttest and Retrospective Evaluation Methods [https://edis.ifas.ufl.edu/pdffiles/wc/wc13500.pdf]), adoption of smart irrigation (6 months later) and gallons of water saved (1 year later).

As you can see from the example, this framework can explicitly outline when data collection should occur, thus effectively guiding the overall program evaluation design. Additionally, it should promote focused evaluation instruments that measure the variables central to the program's vision of success.
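As a final sketch, the following shows one way the immediate outcome data from such a design might be summarized: computing the share of participants who report a knowledge gain on a retrospective pre/post item and comparing it to the 75% target. The 1-to-5 scale and response values are hypothetical; see WC135 for the retrospective method itself.

  # Hypothetical retrospective pre/post responses on a 1-5 knowledge scale.
  # Each pair is (self-rated knowledge before, self-rated knowledge after),
  # both collected at the end of the activity per the retrospective method.
  responses = [(2, 4), (1, 3), (3, 3), (2, 5), (4, 5), (3, 4), (2, 2), (1, 4)]

  gained = sum(1 for before, after in responses if after > before)
  share = gained / len(responses)

  print(f"{share:.0%} of participants reported increased knowledge")
  # Compare against the immediate outcome objective's 75% target.
  print("Objective met" if share >= 0.75 else "Objective not met")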

Summary

Extension professionals may find the list of common early-career evaluation challenges presented in this document, along with the key recommendations, to be helpful tools early in their Extension careers. More experienced Extension professionals may find this information a useful refresher and may find that it helps them as they transition into mentorship roles for newer agents. Finally, aspiring Extension professionals may use this information to plan career preparation activities so they can anticipate a great start in a rewarding occupation.

References

Cothran, H., Wysocki, A., Farnworth, D., & Clark, J. (2005). Developing SMART Goals for Your Organization. FE577. Gainesville: University of Florida Institute of Food and Agricultural Sciences. Retrieved from https://edis.ifas.ufl.edu/fe577

Gouldthorpe, J. L., & Israel, G. D. Capturing Change: Comparing Pretest-Posttest and Retrospective Evaluation Methods. WC135. Gainesville: University of Florida Institute of Food and Agricultural Sciences. Retrieved from https://edis.ifas.ufl.edu/wc135

Harder, A., Place, N. T., & Scheer, S. D. (2010). Towards a competency-based extension education curriculum: A Delphi study. Journal of Agricultural Education, 51(3), 44-52. doi:10.5032/jae.2010.03044

Israel, G., Diehl, D., & Galindo-Gonzalez, S. (2009). Evaluation Situations, Stakeholders & Strategies. WC090. Gainesville: University of Florida Institute of Food and Agricultural Sciences. Retrieved from https://edis.ifas.ufl.edu/wc090

University of Wisconsin Extension. (2003). Enhancing Program Performance with Logic Models. Retrieved from https://fyi.extension.wisc.edu/programdevelopment/files/2016/03/lmcourseall.pdf

Warner, L. A. (2015). Using the Delphi Technique to Achieve Consensus: A Tool for Guiding Extension Programs. WC183. Gainesville: University of Florida Institute of Food and Agricultural Sciences. Retrieved from https://edis.ifas.ufl.edu/wc183

Table 1. Important challenges faced by newer Extension agents.

Item | Percentage
Determining program impacts and how to measure those | 100
Development of an accurate evaluation instrument for a given situation | 90
Evaluating newly developed programs | 90
Management and analysis of data collected | 90
Evaluating long-term impacts of Extension programming | 90
Developing goals and objectives | 86
Understanding how to integrate evaluation into Extension programming | 86
Challenges with the evaluation reporting system (i.e., reporting outcomes, structure, time frame of reporting) | 86
Managing the limited time available for evaluation with the demand for evaluation work | 86
Reporting on evaluation results | 86
Understanding what outcomes can be reported in multiple areas | 85
Difficulty in designing evaluation and collecting evaluation data from the participants of site visits, field days, exhibits, farm demonstrations, etc. | 83
Evaluating behavior change | 83
Lack of understanding of evaluation techniques and where it is best to use them | 83
Maintaining engagement in evaluation among participants and staff who have done it many times before | 83
Evaluating cost savings or return on investment | 79
Getting Extension participants to respond to evaluation surveys | 79
Getting in touch with participants to receive feedback | 79
Connecting evaluation to statewide initiatives and priorities | 79
Identifying impact indicators | 76
Conducting pretest/posttest evaluation | 76
Development and implementation of follow-up evaluation | 76
Evaluating participants who have already adopted the intended behavior/practice | 76
Measuring how an Extension program prevented unwanted outcomes (e.g., reduced childhood obesity) | 75
Disseminating evaluation results to key stakeholders such as federal and state agencies as well as other organizations | 72
Evaluating programs that have an extensive set of expected outcomes | 72
Attaining acceptable participation to strengthen evaluation results | 72

Table 2. Example SMART objectives.

Immediate Outcome Objective: At least 75% of program participants will report increased knowledge of smart-irrigation technologies following a program activity, as measured by a retrospective pre/posttest.

Intermediate Outcome Objective: At least 45% of program participants will report the adoption of at least one smart-irrigation technology within 6 months of program engagement, as measured by a follow-up survey.

Long-Term Impact Objective: At least 25% of program participants will report saving at least 50 gallons of water each month within 1 year of program engagement, as measured by a follow-up survey.

 
Peer Reviewed

Publication #AEC672

Release Date: May 7, 2019

Reviewed: May 4, 2022


About this Publication

This document is AEC672, one of a series of the Department of Agricultural Education and Communication, UF/IFAS Extension. Original publication date May 2019. Visit the EDIS website at https://edis.ifas.ufl.edu for the currently supported version of this publication.

About the Authors

John M. Diaz, assistant professor, and Laura A. Warner, assistant professor, Department of Agricultural Education and Communication; UF/IFAS Extension, Gainesville, FL 32611.
