Successful teaching depends on the quality of materials, how they are presented and used by teachers and students, and how the material is assessed.
Nationally, there has been considerable recent debate and discussion about how to ascertain the effectiveness of any given school curriculum. Under the auspices of the National Research Council, a committee chaired by Jere Confrey, Ph.D., professor of education in Arts & Sciences, has written the report “On Evaluating Curriculum Effectiveness.”
The report reviewed nearly 700 submitted “evaluation documents,” culled them to 192 primary sources and sought to determine whether the quality of the studies could be used to establish the effectiveness of the programs.
“This discussion of how to establish curricular effectiveness in mathematics is particularly relevant in light of President Bush’s recent State of the Union address and proposed budget, which includes significant expenditures on improving mathematics and science education in this country,” Confrey said. “If the funds are to make a difference, thorough, valid and fair evaluations of materials will be critical.”
Confrey presented “Evaluation Framework and Comparative Analysis” Feb. 17 at the annual meeting of the American Association for the Advancement of Science in St. Louis.
“Curriculum evaluation is needed to examine the quality, correctness and comprehensiveness of the materials before instruction, the ways they work when implemented, and their comparative effects on learning for particular topics and for particular students or identified subgroups,” Confrey said. “We often overlook how to systematically engage in improvement in instruction.”
That evaluation is also important at the university level, she said.
“Even here at Washington University, we spend many hours over many years teaching our classes, and yet small changes can often make a big difference in students’ experiences in relation to how particular misconceptions are addressed, in how technologies or lab equipment are used and made available, or in what projects or papers we assign and how we grade them.”
The committee concluded that due to methodological weaknesses and inadequate numbers in the evaluation studies, no determination of the effectiveness of 19 selected curricula could be made.
However, the committee made specific recommendations on how such evaluations should be conducted to provide proper information about curriculum effectiveness to decision-makers.
In contrast to the perspective argued for by the Institute of Education Sciences, the committee recommended not only comparative studies using experimental or quasi-experimental methods, but also content analyses by experts in mathematics and mathematics education, along with case studies, to shed light on what is working and how it works.
“Curriculum evaluation may happen in the class, the dorms or the library,” Confrey said. “It’s important to understand more about what works, for whom, under what conditions and resources and over what time period.
“Our treatment of effectiveness does not imply that one size fits all — we supported innovation — but it does require one to gather and carefully examine the evidence of impact.”