
Operational and Design Issues

Lack of systematic selection of interventions and lack of specificity of treatment effects. Due to a variety of circumstances detailed earlier, the Follow Through programs were not systematically developed or selected according to any uniform criteria (Evans, 1981, pp. 6, 15). Given more time, sponsors might have been able to identify more precisely the types of treatment effects an observer could expect under controlled conditions. More importantly, program sponsors might also have been required to show which specific facets of their interventions (e.g., particular pedagogical techniques) would produce the intended effects. Despite these flaws, the sponsors agreed to have their models evaluated with the same instruments. Unfortunately, the instruments shed little light on what made the ineffective programs so unsuccessful, or, conversely, on what made the effective ones work. Since structured programs tended to show better effects than unstructured ones, efforts could certainly have been made to identify commonalities among the effective structured programs. With further funding, these shared characteristics could have informed the development of additional effective programs or the improvement of ineffective approaches. Instead, funding was reduced for the programs identified as successful in Follow Through, perhaps on the presumption that funding would be better diverted to investigating failed programs (Watkins, 1997). Programs that had no empirical validation at all were recommended for dissemination alongside the successful models.

Lack of random assignment. Random assignment of subjects to treatment and control groups is the ideal method of attributing change in a sample to an intervention rather than to some other factor (including the pre-existing capabilities of students, teachers, or school systems) (Evans, 1981, p. 15). However, for a variety of practical reasons, this procedure was not followed in Follow Through (Stebbins et al., 1977, p. 11). Instead, sites were selected “opportunistically” (Watkins, 1997, p. 19), on the basis of their readiness to participate in the evaluation and their particular circumstances of need. As Stebbins et al. (1977) point out, the treatment groups were often the neediest children. Randomly assigning some of the most disadvantaged children (many of whom had participated in Head Start prior to Follow Through) out of the treatment would certainly have been perceived negatively by community members (p. 61). Stebbins et al. (1977) note that there were “considerable variations in the range of children served”; yet despite the presence of “many of the problems inherent in field social research…evaluations of these planned variations provides us with an opportunity to examine the educational strategies under real life conditions as opposed to contrived and tightly controlled laboratory conditions” (pp. 12–13).
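To make the contrast concrete, the following is a minimal sketch (purely hypothetical numbers and names; not part of the Follow Through evaluation) of why random assignment supports causal attribution: because group membership is decided by chance, the groups are balanced on pre-existing ability in expectation, whereas opportunistic selection of the neediest sites into treatment builds a systematic difference into the comparison before the intervention begins.

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical students with varying pre-existing ability (e.g., prior
# achievement on an arbitrary 100-point scale); all values are invented.
students = [{"id": i, "prior_ability": random.gauss(100, 15)} for i in range(200)]

# Random assignment: each student is equally likely to land in either group,
# so the groups are balanced on prior_ability in expectation.
random.shuffle(students)
treatment = students[:100]
control = students[100:]

def mean(values):
    return sum(values) / len(values)

print("treatment mean prior ability:", round(mean([s["prior_ability"] for s in treatment]), 1))
print("control mean prior ability:  ", round(mean([s["prior_ability"] for s in control]), 1))

# By contrast, opportunistic selection (e.g., enrolling the neediest students
# in treatment) produces groups that differ systematically on prior ability,
# confounding any later comparison of outcomes.
students.sort(key=lambda s: s["prior_ability"])
neediest_treatment = students[:100]   # lowest prior ability enrolled in treatment
remaining_control = students[100:]
print("opportunistic treatment mean:", round(mean([s["prior_ability"] for s in neediest_treatment]), 1))
print("opportunistic control mean:  ", round(mean([s["prior_ability"] for s in remaining_control]), 1))
```

The two pairs of printed means show the difference directly: under random assignment the groups start out nearly identical, while under needs-based selection the treatment group begins well behind, so a simple post-test comparison would understate (or mask) any real program effect.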

Narrowness of instruments. Adams and Engelmann (1996, p. 71) note that many critics have suggested that more instruments should have been used in the Follow Through evaluation, although Egbert (1981, p. 7) agrees with Adams and Engelmann (1996) that the data collection efforts were already extensive. Despite the model sponsors' agreement on a uniform set of instruments to evaluate the effectiveness of their models, many sponsors believed their programs achieved gains on more intrinsic, less measurable indicators of performance, such as increased self-worth or greater parental involvement. To the extent that these desired outcomes occurred and benefited the lives of students in ways that might never be measurable through quantitative means, those aspects of many models were successful. Both the House et al. (1978) critique and others (cited in Wisler et al., 1978) express concerns about the inadequacy of the instruments used to measure self-esteem in the Follow Through evaluation (i.e., the Intellectual Achievement Responsibility Scale (IARS) and the Coopersmith Self-Esteem Inventory). But according to many researchers, it was better to measure outcomes imperfectly than not to measure them at all (Wisler et al., 1978, p. 173). Thus, while “perfect” measures of desired outcomes might never exist, one should not let the perfect be the enemy of the good; by the same logic, one could question the efficacy of conducting any experiment at all on the grounds that some bias or imperfection exists.
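The statistical intuition behind “measure imperfectly rather than not at all” can be illustrated with a small simulation (hypothetical scores and effect sizes, not Follow Through data): adding random measurement error to an instrument widens the sampling variability of a group comparison, but it does not bias the observed mean difference away from the true effect, so a noisy self-esteem scale can still detect a real gain given enough subjects.

```python
import random

random.seed(1)

def noisy_measure(true_score, noise_sd):
    # An imperfect instrument: observed score = true score + random error.
    return true_score + random.gauss(0, noise_sd)

# Hypothetical true self-esteem scores; the treatment adds a true effect of 3.0.
control_true = [random.gauss(50, 10) for _ in range(500)]
treatment_true = [random.gauss(50, 10) + 3.0 for _ in range(500)]

for noise_sd in (0, 10, 20):  # an increasingly unreliable instrument
    control_obs = [noisy_measure(t, noise_sd) for t in control_true]
    treatment_obs = [noisy_measure(t, noise_sd) for t in treatment_true]
    diff = sum(treatment_obs) / len(treatment_obs) - sum(control_obs) / len(control_obs)
    print(f"instrument noise sd {noise_sd:>2}: observed group difference = {diff:.2f}")

# The observed difference stays near the true effect (3.0) at every noise
# level; measurement error adds variability, not systematic bias, to a
# simple treatment-versus-control mean comparison.
```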

