
Biases in Estimating Programme Effects

Randomized field experiments are the strongest research designs for assessing program impact. They are generally the design of choice when feasible because they allow a fair and accurate estimate of a program's actual effects (Rossi, Lipsey & Freeman, 2004). Randomized field experiments are not always feasible, however, and in those situations alternative research designs are available to the evaluator. Whichever design is chosen, all share a common problem: no matter how well thought through or well implemented, each is subject to yielding biased estimates of program effects. Such biases either exaggerate or diminish program effects, and the direction a bias will take usually cannot be known in advance (Rossi et al., 2004).

These biases affect the interests of the stakeholders. Program participants can be disadvantaged if a bias makes an ineffective or even harmful program appear effective. Conversely, a bias can make an effective program appear ineffective or even harmful, so that the program's accomplishments seem small or insignificant and its sponsors are prompted to reduce or eliminate its funding (Rossi et al., 2004). If an inadequate design yields bias, the stakeholders who fund the program are the most concerned, because the evaluation results inform their decision about whether to continue funding, and the final decision rests with the funders and sponsors. Those taking part in the program, or those the program is intended to benefit, are likewise affected by the design chosen and the outcome it renders. The evaluator's concern is therefore to minimize the amount of bias in the estimation of program effects (Rossi et al., 2004).

Bias is most visible in two situations: when the measurement of the outcome with program exposure, or the estimate of what the outcome would have been without program exposure, is higher or lower than the corresponding "true" value (Rossi et al., 2004, p. 267). Not all forms of bias that can compromise an impact assessment are this obvious, however (Rossi et al., 2004). The most common impact assessment design compares two groups of individuals or other units: an intervention group that receives the program and a control group that does not. The estimate of program effect is then based on the difference between the groups on a suitable outcome measure (Rossi et al., 2004). Random assignment of individuals to program and control groups justifies the assumption of continuing equivalence between the groups. Group comparisons that have not been formed through randomization are known as non-equivalent comparison designs (Rossi et al., 2004).
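To make the basic comparison concrete, the following minimal Python sketch (not from Rossi et al.) simulates hypothetical outcome data and computes the difference-in-means estimate of a program effect under random assignment; all numbers and variable names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: each unit's outcome without the program, plus a true effect of 5.
n = 10_000
baseline = rng.normal(50, 10, n)     # outcome each unit would show without the program
true_effect = 5.0

# Random assignment: pure chance determines who receives the program.
treated = rng.random(n) < 0.5
outcome = baseline + true_effect * treated

# Program effect estimate: difference between the groups on the outcome measure.
estimate = outcome[treated].mean() - outcome[~treated].mean()
print(f"Randomized estimate: {estimate:.2f} (true effect = {true_effect})")
```

Because chance alone determines group membership, the two groups are equivalent in expectation and the difference in means recovers the true effect, apart from sampling error.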

  • Selection bias

When the assumption of equivalence is absent, differences in outcome between the groups that would have occurred anyway create a form of bias in the estimate of program effects known as selection bias (Rossi et al., 2004). Selection bias threatens the validity of the program effect estimate in any impact assessment that uses a non-equivalent group comparison design; it arises when some process whose influences are not fully known selects which individuals end up in which group, instead of group assignment being determined by pure chance (Rossi et al., 2004). Selection bias can also occur through natural or deliberate processes that cause a loss of outcome data for members of intervention and control groups that have already been formed. This is known as attrition, and it can come about in two ways (Rossi et al., 2004): (1) targets drop out of the intervention or control group or cannot be reached, or (2) targets refuse to co-operate in outcome measurement. Differential attrition is assumed when attrition occurs as a result of something other than an explicit chance process (Rossi et al., 2004). This means that "those individuals that were from the intervention group whose outcome data are missing cannot be assumed to have the same outcome-relevant characteristics as those from the control group whose outcome data are missing" (Rossi et al., 2004, p. 271). Even random assignment designs are not safe from selection bias induced by attrition (Rossi et al., 2004).
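A hedged sketch of how selection bias can arise, again with invented data: group membership below depends on an unobserved characteristic (here called "motivation", purely hypothetical), so the naive group comparison mixes the true program effect with differences that would have existed anyway.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 10_000
motivation = rng.normal(0, 1, n)                      # unobserved characteristic
baseline = 50 + 5 * motivation + rng.normal(0, 5, n)  # outcome without the program
true_effect = 5.0

# Non-equivalent comparison: more motivated units are more likely to enrol,
# so group membership is not determined by pure chance.
enrolled = rng.random(n) < 1 / (1 + np.exp(-motivation))
outcome = baseline + true_effect * enrolled

# The naive difference in means mixes the program effect with the
# pre-existing difference between the groups (selection bias).
naive = outcome[enrolled].mean() - outcome[~enrolled].mean()
print(f"Naive estimate: {naive:.2f} (true effect = {true_effect})")
```

The naive estimate comes out well above the true effect, because the enrolled group would have had better outcomes even without the program.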

  • Other forms of bias

Other factors can also be responsible for bias in the results of an impact assessment. These generally have to do with events or experiences, other than receiving the program, that occur during the period of the intervention. These biases include secular trends, interfering events, and maturation (Rossi et al., 2004).

  • Secular trends or Secular drift

Secular trends, also termed secular drift, are relatively long-term trends in the community, region, or country that may produce changes which enhance or mask the apparent effects of a program (Rossi et al., 2004). For example, in a period when a community's birth rate is declining, a program to reduce fertility may appear effective because of bias stemming from that downward trend (Rossi et al., 2004, p. 273).
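The fertility example can be sketched with hypothetical numbers (the trend, rates, and timing below are assumptions, not data from Rossi et al.): a before-after comparison made against an already declining birth rate credits the downward trend to an ineffective program.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical community birth rates (per 1,000) over ten years,
# declining by 0.5 per year regardless of any program.
years = np.arange(10)
birth_rate = 30 - 0.5 * years + rng.normal(0, 0.3, len(years))

# An ineffective fertility-reduction program runs during years 5-9 (true effect = 0).
before = birth_rate[:5].mean()
after = birth_rate[5:].mean()

# A before-after comparison attributes the secular decline to the program.
print(f"Apparent 'effect': {after - before:.2f} births per 1,000 (true effect = 0)")
```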

  • Interfering events

Interfering events are similar to secular trends, except that here short-term events produce the changes that may introduce bias into estimates of program effect. For example, a power outage that disrupts communications and hampers the delivery of food supplements may interfere with a nutritional program (Rossi et al., 2004, p. 273).
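A small illustrative sketch with invented numbers: a short-term disruption during part of the measurement period drags the observed outcomes down and understates what the program achieves under normal conditions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical weekly gains in a nutritional outcome for program participants.
weeks = np.arange(12)
program_gain = np.full(len(weeks), 15.0)   # the program adds 15 units per week

# Interfering event: a power outage in weeks 8-9 halts supplement delivery,
# wiping out the program's contribution for those weeks.
disruption = np.where((weeks >= 8) & (weeks <= 9), -15.0, 0.0)
observed_gain = program_gain + disruption + rng.normal(0, 2, len(weeks))

# Averaging over the whole period folds the disruption into the estimate
# and understates the program's normal contribution.
print(f"Estimated weekly gain with the outage: {observed_gain.mean():.1f} (undisturbed gain = 15)")
```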

  • Maturation

Impact evaluation must also cope with the fact that natural maturational and developmental processes can produce considerable change independently of the program. Folding these changes into the estimates of program effects would result in biased estimates. As an example of this form of bias, a program to improve preventative health practices among adults may seem ineffective because health generally declines with age (Rossi et al., 2004, p. 273).
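This too can be sketched with hypothetical data (the health index, rate of decline, and effect size below are assumptions): a simple pre-post comparison folds the maturational decline into the estimate and makes a genuinely effective program look harmful.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 5_000
health_at_start = rng.normal(70, 10, n)   # hypothetical health index for adult participants

# Natural maturation: health declines by about 2 points over the follow-up period,
# independently of the program.
maturation = -2.0
true_effect = 1.5                          # the program genuinely improves outcomes

health_at_end = health_at_start + maturation + true_effect + rng.normal(0, 3, n)

# A simple pre-post comparison mixes the age-related decline with the program effect,
# making an effective program look harmful.
print(f"Pre-post change: {(health_at_end - health_at_start).mean():.2f} (true effect = {true_effect})")
```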

“Careful maintenance of comparable circumstances for program and control groups between random assignment and outcome measurement should prevent bias from the influence of other differential experiences or events on the groups. If either of these conditions is absent from the design, there is potential for bias in the estimates of program effect” (Rossi et al., 2004, p. 274).
