Program Evaluation Techniques

Many program evaluators conduct interviews to collect information.

Program evaluation employs scientific research methods used by social scientists and public policy analysts to study the implementation and impacts of social programs in areas ranging from education and health care to economic development. Evaluation has its roots in the idea that public programs should have demonstrable benefits. Evaluation research began in the early 20th century, emerging as a distinct specialty in the 1970s. Since then, techniques for evaluating programs have increased in number and complexity, ranging from simple narratives to complicated statistical analyses. Often, the nature of the evaluation itself and the research questions it asks help an evaluator determine the most appropriate techniques.

Case Studies

A case study--one of the most basic evaluation methods--uses in-depth narrative to describe the experiences of program clients and those of the personnel who provide services. Usually qualitative in nature, case studies provide rich details about a program's operations and help evaluators understand clients' feelings about a program's services and the benefits they receive as participants. However, the Free Management Library website, in a guide to program evaluation, points out that case studies are often time-consuming to conduct. In addition, case studies are often less comprehensive, emphasizing depth of detail rather than the breadth of a program.

Interviews and Surveys

When used together, interviews and surveys provide narrative details as well as quantitative data that can be analyzed with statistical software. Surveys containing questions with an answer scale yield data that can be easily coded for analysis, while interviews enable the evaluator to expand on survey responses by collecting more detailed information. However, evaluation researchers should exercise care with these techniques, as the wording of questions can bias respondents' answers. In addition, interview data can be costly and time-consuming to analyze, much like case studies.
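To illustrate the point about answer scales, here is a minimal sketch of coding scale-based survey responses into numbers for analysis; the scale labels and responses below are hypothetical, not drawn from any particular study.

```python
# Map a hypothetical five-point answer scale to numeric codes.
SCALE = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

# Hypothetical responses to one survey question.
responses = ["agree", "strongly agree", "neutral", "agree", "disagree"]

# Coded values are ready for statistical software or further analysis.
coded = [SCALE[r] for r in responses]
average = sum(coded) / len(coded)

print(coded)
print(round(average, 2))
```

Once coded this way, responses from many questions can be combined into a dataset and summarized or compared across groups of respondents.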

Focus Groups

A focus group is a structured interview of multiple individuals. Originally developed by sociologists, focus group interviews are popular in marketing research. Program evaluators use focus groups to gain insight into education, health and other social programs from multiple perspectives, such as those of service providers and program clients. Focus groups can provide richly detailed information because of their interactive nature. For example, comments from one respondent may generate interesting responses and ideas from others. This gives focus groups an advantage over interviews and surveys. However, the data can be difficult to summarize and present in a concise manner.

Quantitative Analyses

This approach to program evaluation enables a researcher to investigate a variety of empirical questions regarding program operations and outcomes, using statistical techniques ranging from descriptive measures, such as means and standard deviations, to comparative studies using differences of means, analysis of variance or even regression techniques. These types of program evaluations are most appropriate for programs with systematic data collection methods. For example, a school-based health care program that collects data on students served can build an extensive database on types of services provided and students served, enabling an evaluator to conduct in-depth quantitative studies, even comparing students served by the program to other students in the same school system. Evaluators must analyze the data carefully, be aware of the advantages and drawbacks of various statistical techniques and present results in such a way that program personnel can understand, regardless of their statistical knowledge.
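The descriptive and comparative techniques mentioned above can be sketched briefly. The example below uses made-up test scores for students served by a hypothetical school-based health program versus other students, computing group means and standard deviations and a Welch's t statistic for the difference of means.

```python
import statistics

# Hypothetical scores: students served by the program vs. other students.
served = [78, 85, 90, 72, 88, 81, 79, 94]
not_served = [70, 75, 82, 68, 77, 73, 80, 71]

# Descriptive measures: mean and sample standard deviation per group.
mean_s, mean_n = statistics.mean(served), statistics.mean(not_served)
sd_s, sd_n = statistics.stdev(served), statistics.stdev(not_served)

# Comparative measure: Welch's t statistic for a difference of means,
# which does not assume the two groups have equal variances.
n_s, n_n = len(served), len(not_served)
t = (mean_s - mean_n) / ((sd_s ** 2 / n_s + sd_n ** 2 / n_n) ** 0.5)

print(f"served: mean={mean_s:.1f}, sd={sd_s:.1f}")
print(f"not served: mean={mean_n:.1f}, sd={sd_n:.1f}")
print(f"t statistic={t:.2f}")
```

In practice an evaluator would report the statistic alongside a p-value and, as the section notes, translate the result into language program personnel can act on regardless of their statistical training.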


Triangulation

This program evaluation technique employs multiple research methods to uncover details about the design, implementation and outcomes of a social program. Combining interviews and surveys with quantitative data collected by a public school system is one example of triangulation, using multiple research methods to study the same program. Peter Rossi, Howard Freeman and Mark Lipsey, authors of "Evaluation: A Systematic Approach," write that triangulation can strengthen the validity of findings, with one research technique compensating for the weaknesses and drawbacks of other methods. Triangulation of methods often results in a more comprehensive, detailed evaluation study.
