
Publication #WC109

Evaluating Extension Programs

Alexa J. Lamm, Glenn D. Israel, David Diehl, and Amy Harder

Evaluation has always been a part of Extension program implementation. However, most agents view it as a necessary evil rather than an opportunity to identify and document accomplishments and discover ways to strengthen the impact of the program (Agnew & Foster, 1991). This publication defines evaluation, explains why evaluation is important to Extension programming beyond accountability requirements, describes how UF/IFAS Extension agents are currently evaluating their programs, and makes suggestions for future evaluation efforts that will showcase the value of Extension programming to the public.

Defining Evaluation

Evaluation is one way to measure, examine, and report perceived value or programmatic impact. Evaluation has been defined as systematically determining something's merit, worth, value, quality, or significance (Davidson, 2005; Fournier, 2005; Schwandt, 2002; Stufflebeam, 2001). Patton (2008) described evaluation as "assessing what was intended (goals and objectives), what happened that was unintended, what was actually implemented, and what outcomes and results were achieved" (p. 5). Rossi, Lipsey, and Freeman (2004) stated program evaluation is "the use of social research methods to systematically investigate the effectiveness of programs" (p. 431).

Importance of Evaluation Beyond Accountability

Extension agents often feel pressure to evaluate their programs because of state and federal reporting requirements. With the passage of the Government Performance and Results Act (GPRA) and the Agricultural Research, Extension, and Education Reform Act (AREERA), evaluation capacity-building within state Extension systems was added to the professional development agenda, and agents were encouraged to evaluate for accountability purposes (Franz & Townson, 2008). However, accountability-driven evaluations emphasize looking back and judging programs by assigning blame or praise; they leave little room for learning (Patton, 2008). As a result, evaluation efforts have advanced only minimally within Extension, and, on a federal level, the Cooperative Extension System (CES) continues to generate and report basic information on contacts made and reactions to programs, rather than on behavior changes (medium-term changes) and social, economic, and environmental (SEE) condition (long-term) changes (Franz & Townson, 2008).

The focus of evaluations within Extension needs to shift away from accountability alone. Agents should consider designing and using their evaluations to understand the successes and challenges they face so they can guide future activities in a positive way (Cronbach, 2000). Focusing evaluation efforts on using results for programmatic improvement will create an atmosphere that encourages organizational thinking and learning while still collecting the information needed for accountability reports. To do this, agents need to embrace the idea that "learning means the willingness to go slowly, to try things out, and to collect information about the effects of actions, including the crucial but not always welcome information that the action is not working" (Meadows, Randers, & Meadows, 2004, p. 7).

An example of focusing an evaluation on use would be a basic assessment of an Extension program or tool. In Arkansas, for instance, a farm pond management website was evaluated over a 4-year period (Neal, 2010). Responses to an online survey were collected, identifying improvements that could be made to the website. The survey also revealed that the website was the stakeholders' preferred method of communication. As a result, the other, more traditional Extension methods used for communication with these stakeholders were discontinued (Neal, 2010). For accountability purposes, the data showed how many people were using the website, but the primary focus was on improving the website to the point that face-to-face communication methods were no longer necessary.

Evaluation in UF/IFAS Extension

To gauge the evaluation practices of Extension agents, an online survey was sent to the 326 agents employed by UF/IFAS Extension during the fall of 2010. A total of 229 responded, for a response rate of 70%. In the survey, agents were asked to respond to a series of questions about how they evaluated their "best" or "most important" Extension program the previous year. The results can be viewed in Figure 1.

Based on the survey, 88% of the agents in Florida are evaluating their programs in some way. The majority are keeping program participation records (85%) and conducting posttests of their activities (73%). While it is a positive result that data are being collected in some capacity, just over half of the agents reported collecting data on the behavior changes resulting from their programs. In addition, fewer than 40% of agents are collecting data on how their programs are making SEE condition changes.

Figure 1. Participants' data collection methods (N = 229).

When reviewing how agents are analyzing and reporting their results, most are reporting the actual number of customers attending their activities or programs (83%) (Figure 2). Moving a step beyond this basic measurement, 61% of agents are reporting means and percentages of responses to specific evaluation items. In addition, summaries of the artifacts collected as a result of an Extension program are being reported by just over half of the agents (52%). Very few agents reported standard deviations (10%), a statistic that shows the degree of variation in participants' responses. Even fewer used comparison groups as a control to see whether program participants learned more than people who did not attend.

Figure 2. Participants' data analysis and reporting methods (N = 229).
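The descriptive statistics discussed above (means, percentages of responses, and standard deviations) can be computed without specialized software. The sketch below uses Python's standard `statistics` module; the scores are invented for illustration and are not from the survey reported here.

```python
import statistics

# Hypothetical posttest ratings on a 1-5 scale from a single workshop.
# These values are illustrative only.
scores = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4]

mean_score = statistics.mean(scores)    # average response
sd_score = statistics.stdev(scores)     # sample standard deviation (variation)
pct_agree = 100 * sum(1 for s in scores if s >= 4) / len(scores)

print(f"Mean rating: {mean_score:.2f}")
print(f"Standard deviation: {sd_score:.2f}")
print(f"Percent rating 4 or higher: {pct_agree:.0f}%")
```

Reporting the standard deviation alongside the mean tells readers whether participants responded consistently or whether the average hides wide disagreement.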

Suggestions for Future Evaluation Efforts

The results of the survey show UF/IFAS Extension agents are evaluating their programs. However, most agents are collecting data at the conclusion of their one-shot activities and annual programs, assessing short-term knowledge, skill, attitude, and aspiration changes to measure their level of success. Only 51% reported collecting any form of data on behavior changes among their participants over time. Unfortunately, creating evaluation plans that measure medium- and long-term impacts is more difficult and time-consuming. To do this, an agent must collect information from participants on multiple occasions. At the conclusion of a program, information can be collected on what participants intend to do, but unless they are also contacted 3 months, 6 months, or a year after the program, there is no way of knowing whether they have actually changed their behavior.
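The intention-versus-follow-up comparison described above can be organized as simple paired records. The sketch below is illustrative only; the participant data are invented, and the field names are assumptions, not part of any UF/IFAS instrument.

```python
# Illustrative follow-up records: each entry pairs an end-of-program
# intention with a 6-month follow-up answer. Data are hypothetical.
participants = [
    {"id": 1, "intended": True,  "adopted_6mo": True},
    {"id": 2, "intended": True,  "adopted_6mo": False},
    {"id": 3, "intended": False, "adopted_6mo": False},
    {"id": 4, "intended": True,  "adopted_6mo": True},
    {"id": 5, "intended": True,  "adopted_6mo": True},
]

# Who said they would adopt the practice, and who actually did?
intended = [p for p in participants if p["intended"]]
followed_through = [p for p in intended if p["adopted_6mo"]]

print(f"{len(intended)} of {len(participants)} intended to adopt the practice")
print(f"{len(followed_through)} of {len(intended)} had adopted it at 6 months")
```

The gap between stated intentions and follow-up adoption is exactly the medium-term behavior change that end-of-program surveys alone cannot capture.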

In terms of SEE condition changes, previous research has shown that it is easier to make a change immediately than to sustain it over time (Kotter, 1996). To examine SEE condition impacts, agents must be able to show that the change they are trying to encourage (e.g., implementation of best practices) is still being implemented and making impacts long after their program has concluded. Evaluations of SEE condition changes should be viewed as long-term assessments of a program's effect on a community, not just on individuals. To develop Extension programs that have this kind of impact, agents must understand the direct and indirect effects their educational efforts are having on a larger scale. Questions to ask long after the program is over include:

  • How are my educational programs impacting the community?

  • Would adjusting efforts make a larger impact?

  • Should I be following up with my participants more often to ensure they are using the information I have given them?

  • How can I encourage my participants to use the information they are receiving to make a larger impact on their communities?

Designing medium- and long-term impact evaluation questions that can be used to make programmatic improvements can answer these questions while contributing to the reported medium- and long-term impacts of Extension programs.

It is not easy to develop evaluations that are useful for program improvement while also capturing medium- and long-term impacts. Extension agents need to engage in professional development opportunities that focus on building the skill sets needed to measure medium- and long-term change. Professional development efforts of this type include creating detailed logic models that connect programming with specific measurable outcomes, understanding how to develop instruments that measure behavior and SEE condition changes, and learning how to conduct data analysis that can show change over time.

Evaluations that measure behavior and SEE condition changes may be more attainable when working with a group rather than as a single agent. The group can achieve economies of scale as agents contribute to parts of the evaluation without being responsible for the entire project. Extension agents should consider devising program plans as a team that address the same specific measurable outcomes. By working together, the tasks of creating detailed plans, establishing instruments to measure behavior and SEE condition changes, and conducting data analysis for reporting purposes can be distributed across group members, thus alleviating the pressure and time commitment felt by a single individual.

References
Agnew, D. M., & Foster, R. (1991). National trends in programming, preparation and staffing of county level Cooperative Extension service offices as identified by state Extension directors. Journal of Agricultural Education, 32, 47–53.

Cronbach, L. (2000). Course improvement through evaluation. In D. L. Stufflebeam, G. F. Madaus, & T. Kellaghan (Eds.), Evaluation models (pp. 16–32). Boston, MA: Kluwer Academic Publishers.

Davidson, J. E. (2005). Evaluation methodology basics: The nuts and bolts of sound evaluation. Thousand Oaks, CA: Sage.

Fournier, D. M. (2005). Evaluation. In S. Mathison (Ed.), Encyclopedia of evaluation (pp. 139–140). Thousand Oaks, CA: Sage.

Franz, N. K., & Townson, L. (2008). The nature of complex organizations: The case of Cooperative Extension. In M. T. Braverman, M. Engle, M. E. Arnold, & R. A. Rennekamp (Eds.), Program evaluation in a complex organizational system: Lessons from Cooperative Extension. New Directions for Evaluation, 120, 5–14.

Kotter, J. P. (1996). Leading change. Boston, MA: Harvard Business School Press.

Meadows, D., Randers, J., & Meadows, D. (2004). Limits to growth. White River Junction, VT: Chelsea Green Publishing.

Neal, J. W. (2010). The first 4 years of a warmwater recreational pond management website in Arkansas. Journal of Extension, 48(3). Retrieved from

Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Thousand Oaks, CA: Sage.

Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). Thousand Oaks, CA: Sage.

Schwandt, T. A. (2002). Evaluation practice reconsidered. New York, NY: Peter Lang.

Stufflebeam, D. L. (2001). Evaluation models. New Directions for Evaluation, 89, 7–98.



This document is WC109, one of a series of the Agricultural Education and Communication Department, UF/IFAS Extension. Original publication date May 2011. Reviewed March 2014. Visit the EDIS website at


Alexa J. Lamm, postdoctoral associate; Glenn D. Israel, professor; and Amy Harder, assistant professor, Agricultural Education and Communication. David Diehl, assistant professor, Family, Youth, and Community Science. UF/IFAS Extension, Gainesville, FL 32611.

The Institute of Food and Agricultural Sciences (IFAS) is an Equal Opportunity Institution authorized to provide research, educational information and other services only to individuals and institutions that function with non-discrimination with respect to race, creed, color, religion, age, disability, sex, sexual orientation, marital status, national origin, political opinions or affiliations. For more information on obtaining other UF/IFAS Extension publications, contact your county's UF/IFAS Extension office.

U.S. Department of Agriculture, UF/IFAS Extension Service, University of Florida, IFAS, Florida A & M University Cooperative Extension Program, and Boards of County Commissioners Cooperating. Nick T. Place, dean for UF/IFAS Extension.