Publication #WC118

Team-Based Evaluation of Extension Programs1

Alexa J. Lamm, Amy Harder, Glenn D. Israel, and David Diehl2

A 2010 study of how UF/IFAS county Extension faculty evaluate their programs revealed that most measure success by collecting data at the conclusion of their one-shot activities and annual programs to assess short-term knowledge, skill, attitude, and aspiration changes (Lamm, Israel, Diehl, & Harder, 2011; see Evaluating Extension Programs, http://edis.ifas.ufl.edu/wc109, for a detailed report of the findings).

Approximately half of those responding to the survey reported collecting data on client behavior changes over time. Although creating evaluation plans that measure medium- and long-term impacts is difficult and time-consuming, it is necessary for showing the public value of Extension programs to those making funding decisions. By developing evaluation competencies, county faculty can overcome the challenges many associate with accurately measuring changes in behaviors and in social, economic, and environmental outcomes.

Evaluation is essential for showcasing the public value of Extension programs (Franz & Townson, 2008). The benefits of evaluation for individuals working within Extension include (Deshler, 2011):

  • Recognition for their achievements

  • Opportunities for program improvement

  • An established level of accountability

  • Improvement of newly implemented plans

This publication reports the evaluation skills and abilities expressed by county faculty during a survey in the fall of 2010 and offers strategies for enhancing evaluation efforts based on the strengths and weaknesses expressed in the survey.

Developing Evaluation Skills

Conducting evaluations that capture medium- and long-term impacts requires a certain level of competency. To begin building program development and evaluation competence, newly hired county faculty receive training on how to create program plans and matching evaluations, though this is only one component of their training. Through new faculty trainings and work with their district Extension director and Extension faculty mentor, new county faculty are introduced to an overwhelming number of position requirements, taught how to juggle the needs of county and state stakeholders, and given large amounts of information the state Extension system deems necessary for their success.

After being introduced to the practice of evaluation, county faculty are asked to develop program plans, create educational objectives, and compile the subject matter necessary to teach their customers effectively. Evaluations are often put on the back burner because of the many demands on county faculty members' time (Franz & Townson, 2008). Therefore, when the time comes to create evaluations, the majority of county faculty measure success by collecting short-term assessments (most of which measure "knowledge gained") of their activities and programs.

Ongoing opportunities for employees to learn from and about evaluation must be offered in order to sustain evaluation behaviors within an organization long term (Preskill & Boyle, 2008). Single trainings have proven to be ineffective in preparing county faculty to conduct rigorous evaluations (Arnold et al., 2008). A professional development approach should encourage county faculty to develop their own evaluation tools by allowing them to build evaluation competencies over time through multiple opportunities for engagement and learning (Kolb, 1984).

Evaluation Competencies

A set of Essential Competencies for Program Evaluators (ECPE) was developed by the American and Canadian Evaluation Associations to encompass the knowledge, skills, and dispositions professionals should have in order to conduct program evaluations effectively (Ghere, King, Stevahn, & Minnema, 2006). The ECPE divides evaluation competencies into six categories:

  1. Professional practice: uses standards when evaluating, is ethical, discusses evaluation approaches with stakeholders, and respects customers

  2. Systematic inquiry: understands the concepts of evaluation, is knowledgeable about evaluation methods, develops proper evaluation designs, collects and analyzes data, assesses the validity and reliability of evaluation methods, makes judgments based on data, and develops recommendations based on results

  3. Situational analysis: determines whether a program can be evaluated, identifies the interests of stakeholders, addresses issues that may arise from evaluations, creates evaluations for use, and is open to the input of others

  4. Project management: communicates with clients and stakeholders about evaluation, justifies why evaluation is necessary, uses appropriate technology, and presents work in a timely manner

  5. Reflective practice: has an awareness of one's program evaluation expertise as well as a personal need for professional growth

  6. Interpersonal competence: has the people skills needed to work with diverse groups of stakeholders to conduct program evaluations

As the competencies identified by Ghere et al. (2006) make clear, having the skills to create, disseminate, collect, and interpret evaluation information is not enough to conduct program evaluations effectively. County faculty need to be able to communicate to clients and stakeholders why evaluation is necessary, maintain standards that ensure evaluation results are unbiased, work with others when conducting evaluations, and create evaluations that are useful while fulfilling reporting requirements. It takes time to develop the skill set necessary to become a well-rounded evaluator.

Reported Evaluation Skills and Abilities of County Extension Faculty

An online survey was sent during the fall of 2010 to all UF/IFAS county Extension faculty (N = 326) to gauge their current evaluation skills and abilities. A total of 229 county faculty responded, for a response rate of 70%. The survey used a Likert-type scale to assess how true a set of statements was for each respondent (Table 1).
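The figures in Table 1 are simple descriptive statistics. For readers who want to produce the same kind of summary for their own program evaluations, here is a minimal sketch in Python; the ratings shown are hypothetical and are not the study's data:

```python
import statistics

# Response rate reported above: 229 of 326 faculty responded.
print(f"Response rate = {229 / 326:.0%}")  # 70%

# Hypothetical 1-5 ratings ("not at all true" to "completely true")
# for a single survey statement; not the actual study responses.
ratings = [5, 4, 4, 5, 3, 4, 5, 4, 4, 5]

mean = statistics.mean(ratings)
sd = statistics.stdev(ratings)  # sample standard deviation (n - 1 denominator)

print(f"M = {mean:.2f}, SD = {sd:.2f}")  # M = 4.30, SD = 0.67
```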

County faculty reported they had some evaluation skills and abilities, including being open to the input of others when they evaluate (M = 4.38) and using their evaluation results when making decisions about their programs (M = 3.85). This suggests their situational analysis evaluation competencies are strong. County faculty also identified the needs and interests of their community stakeholders prior to developing their programs (M = 4.07), which suggests they have strong professional practice evaluation competencies.

County faculty only somewhat agreed that they use logic models when evaluating (M = 2.90), assess the reliability of the data they collect (M = 3.06), or create evaluation plans prior to conducting their programs (M = 3.32). The lower ratings on these three skills indicate that many county faculty need to further develop their systematic inquiry evaluation competencies.
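The survey does not specify how reliability should be assessed; one common approach for multi-item Likert instruments is an internal consistency estimate such as Cronbach's alpha. The sketch below shows that calculation under the assumption that responses are arranged as a respondents-by-items matrix; the function name and sample ratings are illustrative, not part of the survey:

```python
import numpy as np

def cronbachs_alpha(responses):
    """Estimate internal consistency for a respondents-by-items matrix.

    responses: 2-D array with one row per respondent and one column per
    Likert item (e.g., 1-5 ratings). Returns Cronbach's alpha.
    """
    responses = np.asarray(responses, dtype=float)
    n_items = responses.shape[1]
    # Variance of each item across respondents (ddof=1 for sample variance).
    item_variances = responses.var(axis=0, ddof=1)
    # Variance of each respondent's total score across all items.
    total_variance = responses.sum(axis=1).var(ddof=1)
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical example: five respondents rating three related items.
ratings = [[4, 5, 4],
           [3, 3, 4],
           [5, 5, 5],
           [2, 3, 2],
           [4, 4, 5]]
print(round(cronbachs_alpha(ratings), 2))  # 0.92; values near 0.9 suggest high consistency
```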

Strategies for Enhancing Competencies and Developing Future Evaluation Efforts

Specific training is necessary to help county faculty improve their systematic inquiry evaluation competencies. Conducting the types of evaluations that measure medium- and long-term impacts requires a substantial investment of time. Considering the commitment it takes to develop strong evaluation competencies and the hours of professional development necessary to master these concepts, a question needs to be raised: Is it really necessary for all county faculty to be fully competent in the systematic inquiry evaluation competencies?

The data collected, coupled with the authors' collective years of professional experience teaching evaluation practices, suggest that the current model of working to develop evaluation competencies across the entire population of county Extension faculty has not been effective. Competing demands on the time of state specialists and county faculty prevent this model from achieving its objectives. Access to in-depth training has not been sufficient, often because in-service training is eliminated due to budget constraints. To deal with these issues, the authors propose a new model of evaluation leadership, known as the Evaluation Leadership Team (ELT) model.

Several EDIS publications have suggested that county faculty work together in teams to alleviate the pressure and time commitment felt when striving to evaluate their programs appropriately (Israel, Diehl, & Galindo-Gonzalez, 2009; Lamm et al., 2011). The team structure is a natural fit for UF/IFAS Extension given its intended use of goal and focus teams (often referred to as work groups). Moving from current practice to the new model requires that county faculty designate an evaluation leader from within each goal and focus team. The designated evaluation leaders would work as an ELT faculty group with the state evaluation specialists to gain the systematic inquiry competencies needed to create and conduct high-level evaluations for their teams. Each evaluation leader would then direct the evaluation efforts within his or her work group (Figure 1).

Figure 1. The Evaluation Leadership Team model builds networks with programmatic work groups, ELT county faculty, and campus evaluation specialists.

Adoption of the ELT model would eliminate the need for all county faculty to be experts in evaluation. The development of rigorous evaluation plans would be handled by individuals prepared for that role while allowing the remaining team members to specialize in other areas of expertise.

Launching the ELT model would require

  1. A supply of in-depth evaluation workshops (equivalent to graduate-level college courses)

  2. The commitment of a group of county faculty to complete the required evaluation training

  3. An administrative commitment to providing resources to ELT faculty/teams, including evaluation data collection and analysis tools

  4. Incentives for county faculty to be ELT members

The in-depth workshops might consist of an initial two-day on-campus evaluation training, followed by participation in a two-hour online session once a week for three to four months. While serving in their work groups, ELT county faculty would be expected to communicate regularly with campus evaluation specialists as they plan and implement their work groups' evaluation plans. County faculty choosing to participate in the ELT would receive focused training and be recognized as evaluation coordinators upon completion.
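For a rough sense of the time commitment involved, assume approximately eight instructional hours per on-campus day and 13 to 17 weekly online sessions over the three to four months (neither figure is specified above). The proposed training would then total

2 days × 8 hours + 2 hours/week × 13–17 weeks ≈ 42–50 contact hours,

which is roughly the contact time of a single graduate-level course, consistent with the graduate-course equivalence suggested in the launch requirements above.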

The authors believe the ELT model would reduce the pressure and time commitment that make evaluation feel like a necessary evil rather than a tool for programmatic improvement. It would ensure that those leading evaluation efforts at the county level have the background, campus connections, and tools necessary to create and conduct high-quality evaluations. In addition, state evaluation specialists would have a group of evaluation leaders they can work with directly to address the specific needs of the work groups those leaders represent. By working in cohesive groups, county and state faculty would also be more likely to collect consistent data that can be used for professional presentations and publications that benefit all parties. Ultimately, the goal of the ELT model is to enhance UF/IFAS Extension's ability to conduct rigorous evaluations that demonstrate Extension's public value.

References

Arnold, M. E., Calvert, M. C., Cater, M. D., Evans, W., LeMenestrel, S., Silliman, B., & Walahoski, J. S. (2008). Evaluating for impact: Educational content for professional development. Washington, DC: National 4-H Learning Priorities Project, Cooperative State Research, Education & Extension Service, USDA.

Deshler, D. (2011). Reading note: Organizational evaluation. Retrieved from http://www.fao.org/docrep/w7510e/w7510e05.htm

Franz, N. K., & Townson, L. (2008). The nature of complex organizations: The case of Cooperative Extension. Program evaluation in a complex organizational system: Lessons from Cooperative Extension. New Directions for Evaluation, 120, 5–14.

Ghere, G., King, J. A., Stevahn, L., & Minnema, J. (2006). A professional development unit for reflecting on program evaluator competencies. American Journal of Evaluation, 27(1), 108–123.

Israel, G. D., Diehl, D., & Galindo-Gonzalez, S. (2009). Evaluation situations, stakeholders & strategies. Retrieved from http://edis.ifas.ufl.edu/wc090

Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development. Englewood Cliffs, NJ: Prentice-Hall.

Lamm, A. J., Israel, G. D., Diehl, D., & Harder, A. (2011). Evaluating Extension programs. Retrieved from http://edis.ifas.ufl.edu/wc109

Preskill, H., & Boyle, S. (2008). A multidisciplinary model of evaluation capacity building. American Journal of Evaluation, 29(4), 443–459.

Tables

Table 1. County Faculty Perceptions of Their Evaluation Skills and Abilities (N = 229)

  • I am open to the input of others when evaluating (M = 4.38, SD = 0.70)

  • I identify the needs and interests of my community stakeholders prior to developing programs (M = 4.07, SD = 0.81)

  • I use evaluation results to make decisions about my programs (M = 3.85, SD = 0.82)

  • I have a strong understanding of the general knowledge base of evaluation (terms, concepts, theories, and assumptions) (M = 3.55, SD = 0.83)

  • I note the strengths and limitations of my evaluations (M = 3.54, SD = 0.98)

  • I report evaluation procedures and results to my community stakeholders (M = 3.51, SD = 0.99)

  • My evaluations serve the information needs of my community stakeholders (M = 3.36, SD = 0.95)

  • I create an evaluation plan prior to conducting my program (M = 3.32, SD = 1.03)

  • I assess the reliability of the data I collect (M = 3.06, SD = 1.16)

  • I use a logic model when evaluating (M = 2.90, SD = 1.04)

Note. Scale: 1 = not at all true for me, 2 = slightly true for me, 3 = somewhat true for me, 4 = mostly true for me, and 5 = completely true for me.

Footnotes

1. This document is WC118, one of a series of the Agricultural Education and Communication Department, UF/IFAS Extension. Original publication date August 2011. Reviewed August 2014. Visit the EDIS website at http://edis.ifas.ufl.edu.

2. Alexa J. Lamm, policy research and evaluation specialist, National Public Policy Evaluation Center; Amy Harder, assistant professor, and Glenn D. Israel, professor, Agricultural Education and Communication Department; and David Diehl, assistant professor, Family, Youth, & Community Sciences Department, UF/IFAS Extension, Gainesville, FL 32611.


The Institute of Food and Agricultural Sciences (IFAS) is an Equal Opportunity Institution authorized to provide research, educational information and other services only to individuals and institutions that function with non-discrimination with respect to race, creed, color, religion, age, disability, sex, sexual orientation, marital status, national origin, political opinions or affiliations. For more information on obtaining other UF/IFAS Extension publications, contact your county's UF/IFAS Extension office.

U.S. Department of Agriculture, UF/IFAS Extension Service, University of Florida, IFAS, Florida A & M University Cooperative Extension Program, and Boards of County Commissioners Cooperating. Nick T. Place, dean for UF/IFAS Extension.