
Phases of Data Analysis

Glenn D. Israel

There are a number of phases, or steps, to follow in analyzing data for an evaluation of an Extension program. The number of phases that an individual conducts will depend on the purpose of the evaluation, as well as available time and resources. Extension faculty who conduct less rigorous evaluations will use fewer steps, while those conducting more rigorous evaluations will follow more. The following provides an overview of the phases in this process.

This paper assumes that the data analysis will focus on a limited number of variables. For studies which involve a large number of measures, the reader may wish to consult Analyzing Survey Data (Israel 2009) or The Savvy Survey #16: Data Analysis and Survey Results (Newberry et al. 2014) for suggestions about analyzing larger data sets.

Screen the Data for Errors

The validity of the findings of any data analysis depends on the quality of the data set, so it is important to identify and correct data handling errors. Coding and entering data into computer files or onto tally sheets creates opportunities for errors.

If data is typed into a computer file using a spreadsheet program (e.g., Microsoft Excel), errors can occur when values are entered in the wrong column, which can misalign an entire line of data. Correcting this type of problem removes errors for a number of variables in one step.

Many of these errors (but not necessarily all) can be identified with the following techniques:

  • Look for numbers outside the range of acceptable values in the frequency distributions. For example, if the adoption of a practice is coded 1 and nonadoption is coded 0, any other value (e.g., 3 or 9) that appears in a frequency distribution for that variable indicates a coding error.
  • Examine combinations of variables to make sure that they add up or that they are logically consistent. For example, acres of pasture, cropland, timber, and other land uses should add up to the total number of acres in a farm.
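The two screening techniques above can be sketched in a few lines of code. This is a minimal illustration with hypothetical records and made-up column names ("adopted", "pasture", "cropland", "total_acres"); it is not part of the original publication.

```python
# Hypothetical survey records; column names are made up for illustration.
records = [
    {"id": 1, "adopted": 1, "pasture": 40, "cropland": 60, "total_acres": 100},
    {"id": 2, "adopted": 9, "pasture": 30, "cropland": 50, "total_acres": 80},   # bad code
    {"id": 3, "adopted": 0, "pasture": 20, "cropland": 30, "total_acres": 70},   # acres mismatch
]

def screen(records):
    """Flag out-of-range codes and logically inconsistent acreage totals."""
    problems = []
    for r in records:
        # Adoption must be coded 0 or 1; any other value is a coding error.
        if r["adopted"] not in (0, 1):
            problems.append((r["id"], "adopted out of range"))
        # Land-use categories should add up to the farm's total acreage.
        if r["pasture"] + r["cropland"] != r["total_acres"]:
            problems.append((r["id"], "acreage does not sum to total"))
    return problems

print(screen(records))
# → [(2, 'adopted out of range'), (3, 'acreage does not sum to total')]
```

In a spreadsheet, the same checks correspond to a frequency count on each coded column and a formula comparing the sum of land-use columns to the total-acres column.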

Once this preliminary step has been completed, the data is ready for analysis.

Select Impact Indicators

After screening your data for errors, the next step in the data analysis process is to select the key indicators for the analysis. Selecting a limited number of key indicators from the pool of available variables helps focus the analysis.

First, identify the key indicators or impact variables that are included as measurable objectives in your Plan of Work. Recall that specific items of data are used to describe the current situation and the preferred (or realistic potential) situation in an Extension plan of work. These data may be the same key indicators that should be used for the evaluation.

For each key indicator, determine whether both pre-program and post-program data are available or post-program data only. In the latter case, clientele may report retrospectively on whether they adopted a practice or changed some behavior.

Measuring Change in Key Indicators

Once the key indicators have been selected, the next step is to address the question of how much change occurred. The purpose is to quantitatively describe the amount of change in one or more measures.

  • For example, if an Integrated Pest Management (IPM) program is being evaluated with pre- and post-program data, you might compare the mean cost of spraying before the program started (T1) with the mean after the program (T2) or you might compare the percent using scouting at T1 with the percent at T2. (A scatter plot also can be used to show the change from pre- to post-program.)

  • If you were evaluating an IPM program with post-program (retrospective) data, you might estimate the average savings in the cost of spraying that farmers report or you might calculate the percent adopting scouting or some other practice.
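The pre/post comparison described above can be sketched as follows. The spraying costs and scouting indicators are hypothetical numbers invented for illustration.

```python
# A minimal sketch of quantifying change in two hypothetical IPM
# indicators: mean spraying cost and percent of farmers using scouting.
from statistics import mean

# Hypothetical pre-program (T1) and post-program (T2) data for five farmers.
cost_t1 = [120, 150, 130, 160, 140]   # spraying cost per acre at T1
cost_t2 = [100, 140, 110, 150, 120]   # spraying cost per acre at T2
scout_t1 = [0, 0, 1, 0, 0]            # 1 = uses scouting at T1
scout_t2 = [1, 0, 1, 1, 1]            # 1 = uses scouting at T2

change_in_cost = mean(cost_t2) - mean(cost_t1)
pct_scouting_t1 = 100 * sum(scout_t1) / len(scout_t1)
pct_scouting_t2 = 100 * sum(scout_t2) / len(scout_t2)

print(change_in_cost)                    # mean cost fell by $16 per acre
print(pct_scouting_t1, pct_scouting_t2)  # → 20.0 80.0
```

With retrospective data only, the same arithmetic applies to the reported savings and the percent reporting adoption; there is simply no T1 column to subtract from.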

Statistical Significance of Change

Once you have measured whether change has occurred in your key evaluation indicators, you may want to answer one of the following questions: Is the change large? Is it statistically significant? Is it more than we might expect by chance alone?

  • For example, to test whether the cost of spraying has significantly decreased, you can use a t-test with pre-program and post-program data. A large t statistic (e.g., 1.96 or larger in absolute value for large samples) indicates that the average costs at the start and the end of the program are significantly different.
  • To test whether the percent using scouting has significantly changed, you can use the Chi-square statistic. A large Chi-square (e.g., 3.84 or larger for 1 degree of freedom) indicates that the percent using scouting after the program is significantly different from the percent before the program.
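As a sketch of the first bullet, a paired t statistic can be computed by hand for the pre/post spraying-cost example; the data are hypothetical, and in practice a statistics package (e.g., scipy.stats.ttest_rel) would also report the p-value.

```python
# Hand-rolled paired t statistic: mean difference / standard error.
from math import sqrt
from statistics import mean, stdev

cost_t1 = [120, 150, 130, 160, 140]   # hypothetical cost before the program
cost_t2 = [100, 140, 110, 150, 120]   # hypothetical cost after the program

diffs = [after - before for before, after in zip(cost_t1, cost_t2)]
n = len(diffs)
# t = mean of the paired differences divided by its standard error
t_stat = mean(diffs) / (stdev(diffs) / sqrt(n))
print(round(t_stat, 2))  # → -6.53
```

With n − 1 = 4 degrees of freedom, the two-tailed critical value at the .05 level is about 2.78, so a t of −6.53 would indicate a statistically significant decrease in spraying cost for these made-up numbers.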

If there is no change, or if the change is not significantly different from zero, should we conclude that the program had no effect or that there is no change? Not necessarily, because other factors outside of your Extension program may hide the true impact of your program.

Association between Change and Extension Programs

The next step in the analysis process is to examine the association between change in the key indicators (or impact variables) and indicators of involvement by clientele in Extension programs. This step adds more rigor to your analysis and increases the credibility of statements about impacts of your Extension program.

After looking at differences between pre- and post-program scores for an impact indicator, this step examines the relationship between a program variable and change in the impact indicator. In this context, a program variable refers to measuring whether an individual participated in an Extension program, the number of programs in which a person participated, or the types of programs (such as workshops, demonstrations, learn by mail, etc.). By including one or more program variables in the analysis, a comparison of groups can be made (Rivera et al. 1983).

Comparison Groups

There are two types of comparison groups used in evaluation studies. These are between groups and within groups comparisons.

Between Groups Comparison

A between groups comparison is used to examine the difference between program participants and nonparticipants¹ with regard to each group's score for an impact indicator. When the change in the impact indicator is larger for program participants than for nonparticipants, there is further evidence to support a conclusion that the Extension program is effective.

Within Groups Comparison

A within groups comparison uses only program participants to form the groups. One type of comparison is between those more intensively involved in the program and those who were less involved. For example, the number of educational activities that clientele attended can be the basis of a within groups comparison. Finding a difference in impact indicator scores between highly involved and less involved participants would allow the evaluator to refine his or her conclusions about the program's effect.

A second type of within groups comparison is based on those involved in one type of educational activity and those involved in a different type of activity. For example, clientele who attend demonstrations can be compared with those who use learn by mail educational material or those who watch a video on the same subject. Comparing methods of program delivery can show which are more effective, and this can be used to improve the program.

Statistics Used for Comparing Groups

The previous section reviewed the logic of using comparison groups to evaluate an Extension program. In this section, several statistics that can be used to describe and test the significance of differences between groups are listed; a number of other statistics are not mentioned here. In addition, the data should be examined to see whether the assumptions associated with a specific statistical technique can be met before using that statistic.

  • A t-test can be used to compare the average score of program participants with the average score for nonparticipants. The t-test is used with variables measured at the interval or ratio level, such as income or a test score.
  • A Chi-square test can be used to compare the percent of participants who have had KASA (knowledge, attitudes, skills, and aspirations) change or practice change (Bennett 1979) with the percent for nonparticipants. The Chi-square test is used for nominal or ordinal level measures of impact.
  • A Chi-square test also can be used to compare change in impact indicators between program participants who were involved through one type of delivery method (e.g., workshops) and participants involved through another method. Again, the impact indicators involving the use of Chi-square are nominal or ordinal level variables.
  • Correlation and regression coefficients can be used to examine whether the score of the impact indicator increases with an increase in the extent of program participation (e.g., the number of educational activities attended).
  • Logistic regression can test if the probability of adopting or using a practice increases with either an increase in participation (e.g., the number of educational activities attended) or with the type of delivery method (e.g., meetings, workshops, learn by mail, etc.).
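A between groups Chi-square test from the list above can be sketched with the standard shortcut formula for a 2×2 table. The adoption counts below are hypothetical.

```python
# Chi-square statistic for a 2x2 table [[a, b], [c, d]] using the
# standard shortcut formula (no continuity correction).
def chi_square_2x2(a, b, c, d):
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: 30 of 40 participants adopted scouting,
# versus 10 of 40 nonparticipants.
#            adopted  did not adopt
# participants   30        10
# nonparticipants 10       30
stat = chi_square_2x2(30, 10, 10, 30)
print(round(stat, 2))  # → 20.0
```

Because 20.0 far exceeds the 3.84 critical value for 1 degree of freedom, the difference in adoption rates between participants and nonparticipants would be judged statistically significant for these made-up counts.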

If we find that the program variable shows no association with the impact variable, does this mean that the program had no effect? Again, the answer is not necessarily. Activities by other agencies or other factors can hide the association between your program's activities and changes made by clientele.

Elaborating Program Impacts

The final and most complex phase of the data analysis involves examining the effect of your program on changes in impact indicators while controlling for the effects of other factors (see, for example, Israel, Easton, and Knox, 1999). An analysis which shows significant effects on clientele behavior due to the program and not other factors further increases the credibility of conclusions about program impact.

This final phase of the analysis involves elaborating on the relationship between the program and impact variables. The purpose of elaboration is two-fold:

  • To address alternative explanations for the change. If change occurs, is it the result of the program or of other causes?
  • To better understand under what conditions the program works best and how the program can be improved. In essence, elaboration helps us clarify the relationship between program variables and impact indicators because we take into consideration the context or environment in which the program occurs. The elaboration process is more fully explained in Elaborating Program Impacts through Data Analysis (Israel 2009). Analysis in this phase involves the use of either tabular analysis or multivariate statistical techniques, such as regression, analysis of covariance, and logistic regression.
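A toy tabular elaboration can illustrate the idea: adoption rates for participants versus nonparticipants are computed within levels of a control variable to see whether the program effect holds once that factor is taken into account. The records and the "farm size" control variable below are hypothetical.

```python
# Tabular elaboration sketch: adoption rate by participation,
# within levels of a hypothetical control variable (farm size).
from collections import defaultdict

# (participated, farm_size, adopted) — hypothetical records
data = [
    (1, "small", 1), (1, "small", 1), (1, "small", 0),
    (0, "small", 0), (0, "small", 1), (0, "small", 0),
    (1, "large", 1), (1, "large", 1), (1, "large", 1),
    (0, "large", 0), (0, "large", 1), (0, "large", 0),
]

# (size, participated) -> [adopters, total]
rates = defaultdict(lambda: [0, 0])
for participated, size, adopted in data:
    cell = rates[(size, participated)]
    cell[0] += adopted
    cell[1] += 1

for (size, participated), (adopters, total) in sorted(rates.items()):
    group = "participants" if participated else "nonparticipants"
    print(f"{size:5s} {group:15s} {100 * adopters / total:.0f}% adopted")
```

If participants out-adopt nonparticipants within every level of the control variable, the program effect is not merely an artifact of that factor; multivariate techniques such as logistic regression extend the same logic to several control variables at once.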

In general, evaluations that undertake the elaboration process require the resources of more than one county Extension professional. This type of evaluation is more practical when a number of county and state Extension faculty can pool their resources to collect data with the necessary detail and number of observations. Using a team approach in cooperation with other Extension professionals to evaluate one of your major programs can yield a wealth of information that can help you improve your educational program for greater impact.

Endnote

1. The nonparticipant group is usually called a control group.

References

Bennett, C. F. 1979. Analyzing Impacts of Extension Programs. Science and Education Administration, USDA.

Israel, G. D. 2009. Elaborating Program Impacts through Data Analysis. Program Evaluation and Organizational Development, IFAS, University of Florida. PEOD-3, September. https://edis.ifas.ufl.edu/publication/PD003

Israel, G. D. 2009. Analyzing Survey Data. PEOD-8. Gainesville: University of Florida Institute of Food and Agricultural Sciences.

Israel, G. D., J. O. Easton, and G. W. Knox. 1999. Adoption of Landscape Management Practices by Florida Citizens. HortTechnology 9(2): 262–266. https://doi.org/10.21273/HORTTECH.9.2.262

Newberry, M. G., III, J. G. O'Leary, and G. D. Israel. 2014. The Savvy Survey #16: Data Analysis and Survey Results. AEC409. Gainesville: University of Florida Institute of Food and Agricultural Sciences. https://edis.ifas.ufl.edu/publication/PD080

Rivera, W. A., C. F. Bennett, and S. M. Walker. 1983. Designing Studies of Extension Program Results: A Resource for Program Leaders and Specialists. Cooperative Extension Service, University of Maryland and ES-USDA.

Publication #PEOD1

Release Date: September 13, 2021


About this Publication

This document is PEOD1, one of a series of the Agricultural Education and Communication Department, UF/IFAS Extension. Original publication date September 1992. Revised June 2015, June 2018, and June 2021. Visit the EDIS website at https://edis.ifas.ufl.edu for the currently supported version of this publication.

About the Authors

Glenn D. Israel, professor, Department of Agricultural Education and Communication, and Extension specialist, Program Development and Evaluation Center, UF/IFAS Extension, Gainesville, FL 32611.
