Project Follow Through - Critiques

Wisler et al. (1978), in their review of the Follow Through experience, wrote that no other body of educational data had likely been examined more extensively, excepting the landmark Equality of Educational Opportunity Survey (p. 177). At least three major reevaluations of the Follow Through data exist in the literature: House et al. (1978), Bereiter and Kurland (1981), and Kennedy (1981). All largely confirm the original statistical analysis conducted by Abt Associates. The consensus among most researchers is that structured models tended to perform better than unstructured ones (Evans, 1981, pp. 13–14), and that the Direct Instruction and Behavior Analysis models performed better on the instruments employed than did the other models (Rhine, 1981, p. 302; Wisler et al., 1978, p. 180; Adams & Engelmann, 1996, p. 72).

Most critiques of the Follow Through experiment have focused on the operational and design problems that plagued it (e.g., Elmore, 1977). In particular, these critiques note that there was more variation within a given model than there was from model to model. This variation has largely been attributed to the difficulty of measuring the effectiveness of a particular implementation; the measures used were largely qualitative and anecdotal (Stebbins et al., 1977). In some instances, sites were included in the analysis even though they had ceased to implement specific models, or the model sponsors had serious reservations about the way particular models were implemented (Engelmann, 1992; Adams & Engelmann, 1996).

The most vocal critique was the reanalysis by House et al. (1978). The article, along with several rebuttals from the original evaluation team and other researchers, was published in the Harvard Educational Review in 1978. The authors were extremely dissatisfied with the evaluators' pronouncement that the basic skills models outperformed the other models. They approached the critique on the assumption that basic skills are just that: basic. They also implied that basic skills are taught only through “rote methods,” a phrase with a decidedly negative connotation (p. 137).

Regarding the finding that “models that emphasize basic skills produced better results on tests of self-concept than did other models” (Stebbins et al., 1977, p. xxvi), the authors question the efficacy of the self-concept measures, implying, among other things, that young students cannot possibly have a concrete understanding of self-concept (pp. 138–139). Although the article was intended as a review of the operational design of the Follow Through evaluation, it appears instead to (1) dispute the finding that the cognitive-conceptual and affective-cognitive models were largely failures, and (2) unilaterally condemn the models that emphasized basic skills. The implication is that the goal of education should not be increased student achievement in basic skills alone, and that Follow Through would have been better employed to discover how models of all three orientations could be made successful. Absent from the critique is the finding that, among third graders, only the Direct Instruction model demonstrated positive effects in all three domains, and that one of the two remaining models with positive effects in at least two domains (Behavior Analysis; the other was the Parent Education model) was also a self-described “basic skills model” (Adams & Engelmann, 1996, p. 72).