
Publication #FOR291

Lessons Learned from Evaluations of Citizen Science Programs

Luke Gommerman and Martha C. Monroe

Extension agents with an interest in increasing the scientific and environmental awareness of their constituents may find an answer through a form of participatory scientific research known as citizen science. Citizen science uses volunteers of all ages, professions, backgrounds, and skills—often across broad geographic areas—to engage non-scientists in a variety of tasks, but most commonly data collection. Programs incorporating citizen scientists have existed for decades and recently have grown in popularity among the scientific community, both within the United States and internationally. Examples of data collected by citizen science programs include water quality parameters, sightings of birds or invasive species, and reports of phenological events including first observed flower blooms and arrival of migrating species. The number and geographic extent of volunteers in citizen science programs can vary greatly; one study of local water quality involved 12 high school students in Hamilton, Ontario, Canada, while the Audubon Society's Annual Christmas Bird Count attracts over 60,000 observers across the United States (Au et al. 2000; Cohn 2008). Citizen science programs are also being designed for use in developing countries where the need for education is great and travel cost and logistical demands may constrain traditional research opportunities (Braschler 2009).

Both volunteers and researchers can potentially benefit from citizen science programs (Figure 1). Volunteers can increase their knowledge and understanding of the scientific process, gain deeper understanding of natural phenomena and issues of local importance, strengthen their attitudes toward their natural environment, and participate in making science-based recommendations. Citizen science programs can also provide scientists with an opportunity to increase public awareness concerning their areas of study across local or global scales and can make it possible to answer research questions that require observations spread over time or space or that otherwise would not have sufficient resources to address. Additionally, Couvet et al. (2008) note that the longevity of some well-designed, self-sustaining citizen science programs exceeds fifty years, giving researchers an invaluable, broad temporal perspective.

Figure 1. Citizen scientist volunteers assist in the installation of groundwater monitoring wells.
Credit: L. Gommerman

This fact sheet was written to inform potential citizen science practitioners of recent evaluations of citizen science programs. Looking closely at identifying appropriate tasks for volunteers, assessing data validity, and evaluating changes in volunteers' knowledge and attitudes can help organizers avoid common pitfalls and develop citizen science programs most likely to succeed.

What Contexts Are Most Appropriate for Citizen Scientists?

Attributes of scientific projects that appear to be most suitable for using citizen scientists are found in Table 1. These include projects where a large number of data collectors will make the research study more feasible, for instance, when data collection is labor-intensive (“Many hands make light work”) or when it involves field-based activities over extensive spatial and temporal scales, as would be the case in showing changes in bird migration patterns in response to climate change. Citizen scientists can provide nearly instantaneous observations across thousands of miles (Lepage and Francis 2002).

The characteristics identified in Table 1 can make both large-scale and smaller, local research projects suitable for citizen scientists. Data collected from field observations are relevant and often interesting; well-designed protocols, training materials, and professional assistance help ensure the reliability of the data (Cohn 2008; Haag 2005). Internet data entry systems enable people from a large geographic area to be involved and help make citizen science projects more relevant and the resulting data more valid. In addition, participants can access initial results and see how their data are being used, which has been found to encourage continued involvement with a project (Gorman 2001; Silvertown 2009).

Table 1. Attributes of research projects ideally suited for citizen science.

- Data collection is labor intensive
- Data are collected from field situations
- Quantitative measurements/observations are needed
- Protocols are well designed and easy to learn and execute
- Spatial and/or temporal extents are broad
- Internet-accessible data submission and results acquisition are possible
- Guide materials and/or professional assistance are available
- Large data sets are needed

Are Data Collected by Citizen Scientists Valid?

Scientists, reviewers, and decision makers may question the validity of data gathered by volunteers. Evaluations of data validity in citizen science projects have largely been based on comparisons of volunteer data with data obtained by professional staff. The majority of studies indicate that, when given proper training and materials, volunteers can collect data comparable to data collected by professional scientists (Au et al. 2000; Canfield et al. 2002; Fore et al. 2001; Delaney et al. 2008). In addition, while some technical detail (e.g., species identification) may be lost with volunteers, more general monitoring parameters (e.g., counts of all species or of conspicuous species) can be accurately obtained (Haag 2005; Newman et al. 2010). In some cases, new and equally reliable protocols are developed that enable citizens to hold samples for professional analysis, such as chemical measurements in lakes taken by LAKEWATCH volunteers (Canfield et al. 2002). Evaluations have also revealed that quantitative measurements made by citizen scientists appear more reliable than their qualitative assessments. Such findings are exemplified in a transect study of white oak stands in Oregon by Galloway et al. (2006): while quantitative measurements (e.g., tree abundance, diameter at breast height) made by groups of students in grades 3–5 and 6–10 were statistically similar to those obtained by professionals working for federal agencies, students' assessments of more subjective attributes, including tree dead/alive status and tree crown shape, did not match the professionals' assessments.

An alternative evaluation method compares two or more citizen science programs that monitor similar parameters, with the assumption that comparable results lend credence to one another. One such comparison was conducted in Lepage and Francis's (2002) formative evaluation of two Cornell Laboratory of Ornithology projects: Christmas Bird Counts and Project FeederWatch. Results indicated that the two projects obtained statistically similar population trends for approximately 80% of the bird species observed by both studies, leading to the conclusion that both programs appear to be recording accurate population trends for a majority of bird species.

Trained staff accompanying citizen scientists can help to ensure data validity. When this is not possible, many projects have successfully enhanced data validity by offering training sessions, guidebooks, and clear instructions for carrying out protocols (Canfield et al. 2002). Ultimately, scientists are responsible for carefully analyzing data obtained by volunteers and need to be prepared to remove suspicious data sets.
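The screening step described above can be sketched in code. The data, threshold, and function names below are hypothetical illustrations of one simple approach (flagging volunteer data sets whose readings diverge too far from professional reference measurements at the same stations), not a protocol from any program cited here:

```python
# Minimal sketch of screening volunteer data against professional
# reference measurements. All names, values, and the 20% threshold
# are hypothetical illustrations.
from statistics import mean

def mean_relative_error(volunteer, reference):
    """Average |volunteer - reference| / reference across paired samples."""
    return mean(abs(v - r) / r for v, r in zip(volunteer, reference))

def flag_suspicious(data_sets, reference, threshold=0.20):
    """Return names of volunteer data sets whose mean relative error
    against the professional reference exceeds the threshold."""
    return [name for name, values in data_sets.items()
            if mean_relative_error(values, reference) > threshold]

# Hypothetical water-clarity readings (m) at four shared stations.
reference = [2.0, 3.1, 1.5, 2.4]          # professional measurements
data_sets = {
    "volunteer_A": [2.1, 3.0, 1.6, 2.3],  # close to reference
    "volunteer_B": [3.5, 1.0, 2.9, 4.0],  # diverges strongly
}
print(flag_suspicious(data_sets, reference))  # → ['volunteer_B']
```

In practice a statistical test on the paired measurements would be more defensible than a fixed threshold, but the structure is the same: compare, quantify the disagreement, and set aside data sets that fail the check for closer review rather than discarding them silently.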

All guidance and training materials used for citizen science programs should be pilot tested before they are distributed to ensure that they clearly communicate the protocols and the purpose of the study to volunteers. A pilot test of a citizen science project to survey European honey bee colonies affected by Colony Collapse Disorder, reported by Phillips (2008), revealed an interesting problem facing eager citizen scientists. The protocol required volunteers to record the appearance of bees visiting a sunflower over a 30-minute period. If they did not see any bees, however, volunteers noted feeling that they were failing to fulfill their scientific obligation and lost motivation to continue making observations. This was alarming to the organizers, because an observation of "no bees" is important for the study. The protocol was modified to instead record the arrival time of bees at the sunflower for 30 minutes or until five bees were observed, whichever came first. The protocol was later revised again to limit observations to 15 minutes, recording only the arrival time of each bee at a blooming sunflower.

Citizen science programs often attract volunteers who are already highly interested in the subject matter (e.g., birders, butterfly enthusiasts). The prospect of obtaining additional information and being part of new discoveries may motivate their participation. It may be helpful to provide information about scientific processes and new findings to promote effective scientific understanding (Brossard et al. 2005). Galloway et al. (2006) also observed that participants' preconceptions may lead to biases in data acquisition and misinterpretation of results. These preconceptions should be addressed during training or within guidance materials so that participants are aware of the importance of following study protocols, such as randomized sampling, before they begin collecting data. Citizen scientists who feel they are contributing to the scientific study return higher quality data; hence, contact between professionals and volunteers appears to enhance the power of the study (Nerbonne and Nelson 2008). Online resources can provide a forum for volunteers to communicate with scientists or other volunteers. In addition, websites can provide auxiliary learning materials and allow volunteers to see that their observations are being used, both of which can provide additional motivation for continued participation.

Do Citizen Scientists Increase Their Scientific or Environmental Literacy?

Evaluations that assess whether citizen scientists experience knowledge gains, improve their understanding of scientific processes, or change their attitudes toward the environment or science are also available. These studies often use pre- and post-tests and responses from surveys, interviews, and letters to measure program impacts. For example, Trumbull et al. (2000) examined the responses of 700 citizen scientists involved in the Cornell Laboratory of Ornithology's Seed Preference Test and observed that 78% of respondents undertook some scientific thinking processes (ranging from making observations to developing and testing their own hypotheses) during the course of their participation in the project. Similarly, focused scientific studies conducted by students in the Garden Mosaics Program, along with interaction with more experienced gardeners, led to knowledge gains in local ecology and gardening (Krasny and Tidball 2009). Additionally, an evaluation of another Cornell Laboratory of Ornithology citizen science project documented an evolution in student attitudes, ultimately leading students to think of themselves as scientists (Lewenstein 2001).

Evaluations have also revealed ways in which citizen science programs can be improved. While an evaluation of the Smithsonian Neighborhood Nestwatch project by Evans et al. (2005) indicated substantial knowledge gains and an enhanced sense of place among participants, there was little evidence of increased knowledge regarding scientific processes. Brossard et al. (2005) determined that while participating citizen scientists exhibited increases in knowledge specific to bird biology, they did not show significant attitude changes regarding the environment and science. These results may be complicated by the fact that participants demonstrated great concern for environmental issues before they participated in the project, and that there were large numbers of "undecided" responses to scientific perception questions, hinting at more complex feelings toward science than previously thought. An evaluation of middle school students by Trumbull et al. (2005) observed only modest increases or unchanged understanding of scientific methodology. These insights can reveal where disconnects exist between citizens and scientists and suggest how to improve scientific or environmental understanding in the design of a project.

Clearly, the benefits of participation in citizen science programs can make these projects important strategies for enhancing public science literacy and perhaps achieving greater consensus on science-based policies. The demographics of citizen scientists, however, appear to be skewed toward older individuals who are highly educated and considered least in need of development regarding scientific understanding, environmental awareness, and skill advancement (Trumbull et al. 2000). Indeed, this is the population most likely to volunteer for many community programs. This limitation is even more pronounced in developing countries, and more ironic because these areas are often home to richly diverse and poorly understood ecosystems most in need of study (Braschler 2009).

To enable more people to gain the benefits of participating in citizen science, and to improve decisions and recommendations by incorporating more voices, citizen science programs should increase efforts to diversify the ranks of their volunteers. Actively reaching out to new populations, such as secondary students, youth groups, or faith-based communities, may attract a broader base of participants than is possible when a project relies on volunteers who already have the interest and time needed. Working through civic organizations or employers may be another strategy to reach beyond the core of those interested in environmental data. Large-scale programs are possible and can reap community benefits. Analyses of community-based monitoring programs in India, Tanzania, and Madagascar demonstrate the potential for local citizens to document and offset their carbon emissions through sustainable resource management (Danielsen et al. 2011).


Evaluations of citizen science programs reviewed here reveal numerous potential benefits of citizen science. Citizen science projects can provide scientists with important and reliable data, while citizen scientists can develop increased scientific and environmental understanding. However, many citizen science programs have yet to be evaluated for these attributes. Conducting more evaluations of the attainment of science and education goals will likely reveal additional lessons for improving the outcomes of these projects.

Additional online resources are available for designing, implementing, and evaluating citizen science projects and for locating programs seeking volunteers in your area.


Au, J., P. Bagchi, B. Chen, R. Martinez, S. A. Dudley, and G. J. Sorger. 2000. Methodology for public monitoring of total coliforms, Escherichia coli and toxicity in waterways by Canadian high school students. Journal of Environmental Management 58 (3):213–230.

Braschler, B. 2009. Successfully implementing a citizen-scientist approach to insect monitoring in a resource-poor country. BioScience 59 (2):103–104.

Brossard, D., B. Lewenstein, and R. Bonney. 2005. Scientific knowledge and attitude change: The impact of a citizen science project. International Journal of Science Education 27 (9):1099–1121.

Canfield, D. E., C. D. Brown, R. W. Bachmann, and M. V. Hoyer. 2002. Volunteer lake monitoring: Testing the reliability of data collected by the Florida LAKEWATCH program. Lake and Reservoir Management 18 (1):1–9.

Cohn, J. P. 2008. Citizen science: Can volunteers do real research? BioScience 58 (3):192–197.

Couvet, D., F. Jiguet, R. Julliard, H. Levrel, and A. Teyssedre. 2008. Enhancing citizen contributions to biodiversity science and public policy. Interdisciplinary Science Reviews 33 (1):95–103.

Danielsen, F., M. Skutsch, N. D. Burgess, P. M. Jensen, H. Andrianandrasana, B. Karky, R. Lewis, J. C. Lovett, J. Massao, Y. Ngaga, P. Phartiyal, M. K. Poulsen, S. P. Singh, S. Solis, M. Sorensen, A. Tewari, R. Young, and E. Zahabu. 2011. At the heart of REDD+: a role for local people in monitoring forests? Conservation Letters, early view, 1–10.

Delaney, D. G., C. D. Sperling, C. S. Adams, and B. Leung. 2008. Marine invasive species: validation of citizen science and implications for national monitoring networks. Biological Invasions 10:117–128.

Evans, C., E. Abrams, R. Reitsma, K. Roux, L. Salmonsen, and P. P. Marra. 2005. The Neighborhood Nestwatch Program: participant outcomes of a citizen-science ecological research project. Conservation Biology 19 (3):589–594.

Fore, L. S., K. Paulsen, and K. O'Laughlin. 2001. Assessing the performance of volunteers in monitoring streams. Freshwater Biology 46:109–123.

Galloway, A. W. E., M. T. Tudor, and W. M. Vander Haegen. 2006. The reliability of Citizen Science: A case study of Oregon White Oak stand surveys. Wildlife Society Bulletin 34 (5):1425–1429.

Gorman, J. 2001. Counting birds at the grassroots: Making a census into "Citizen Science," naturalists share their findings online. New York Times, Dec 13 2001, G1, G8.

Haag, A. 2005. A trip of a lifetime. Nature 435:1018–1020.

Lepage, D., and C. M. Francis. 2002. Do feeder counts reliably indicate bird population changes? 21 years of winter bird counts in Ontario, Canada. The Condor 104 (2):255–270.

Nerbonne, J. F., and K. C. Nelson. 2008. Volunteer macroinvertebrate monitoring: Tensions among group goals, data quality, and outcomes. Environmental Management 42:470–479.

Newman, G., A. Crall, M. Laituri, J. Graham, T. Stohlgren, J. C. Moore, K. Kodrich, and K. A. Holfelder. 2010. Teaching citizen science skills online: Implications for invasive species training programs. Applied Environmental Education and Communication 9:276–286.

Phillips, A. L. 2008. Of sunflowers and citizens. American Scientist 96 (5):375.

Silvertown, J. 2009. A new dawn for citizen science. Trends in Ecology and Evolution 24 (9):467–471.

Trumbull, D. J., R. Bonney, D. Bascom, and A. Cabral. 2000. Thinking scientifically during participation in a citizen-science project. Science Education 84:265–275.



This document is FOR291, one of a series of the School of Forest Resources and Conservation, UF/IFAS Extension. Original publication date May 2012. Reviewed August 2017. Visit the EDIS website.


Luke Gommerman, graduate student, Soil and Water Sciences Department; and Martha C. Monroe, professor, School of Forest Resources and Conservation; UF/IFAS Extension, Gainesville, FL 32611.

The Institute of Food and Agricultural Sciences (IFAS) is an Equal Opportunity Institution authorized to provide research, educational information and other services only to individuals and institutions that function with non-discrimination with respect to race, creed, color, religion, age, disability, sex, sexual orientation, marital status, national origin, political opinions or affiliations. For more information on obtaining other UF/IFAS Extension publications, contact your county's UF/IFAS Extension office.

U.S. Department of Agriculture, UF/IFAS Extension Service, University of Florida, IFAS, Florida A & M University Cooperative Extension Program, and Boards of County Commissioners Cooperating. Nick T. Place, dean for UF/IFAS Extension.