Positive Effects of Comprehensive Teacher Induction

Today, Mathematica Policy Research, Inc. released the final report of its IES/U.S. Department of Education-funded randomized controlled trial (RCT) of comprehensive teacher induction. It shows a statistically significant and sizeable impact on the student achievement of third-year teachers who received two years of robust induction support: 0.20 standard deviations in mathematics and 0.11 standard deviations in reading. That's the equivalent of moving students from the 50th to the 54th percentile in reading achievement and from the 50th to the 58th percentile in math achievement.
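For readers who want to check that percentile arithmetic, here is a minimal sketch, assuming normally distributed test scores. It is not from the Mathematica report, just a back-of-the-envelope converter from an effect size in standard-deviation units to a percentile shift.

```python
from statistics import NormalDist

def percentile_after_gain(start_percentile: float, effect_size_sd: float) -> float:
    """Percentile rank after a gain of `effect_size_sd` standard deviations,
    assuming test scores follow a normal distribution."""
    z_start = NormalDist().inv_cdf(start_percentile / 100)
    return 100 * NormalDist().cdf(z_start + effect_size_sd)

# Induction-study impacts reported above, for students starting at the median:
print(round(percentile_after_gain(50, 0.11)))  # reading, 0.11 SD -> ~54th percentile
print(round(percentile_after_gain(50, 0.20)))  # math,    0.20 SD -> ~58th percentile
```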

As a basis of comparison, I note that in 2004, Mathematica conducted an RCT of Teach for America (TFA). In that study, it compared the gains in reading and math achievement made by students randomly assigned to TFA teachers with those of students assigned to other teachers in the same schools. On average, students with TFA teachers raised their mathematics test scores by 0.15 standard deviations (versus 0.20 standard deviations in the induction study), but the study found no impact on reading test scores (versus 0.11 standard deviations in the induction study).

In another recent Mathematica report (boy, these folks are busy!), the authors note that "The achievement effects of class-size reduction are often used as a benchmark for other educational interventions. After three years of treatment (grades K-2) in classes one-third smaller than typical, average student gains amounted to 0.20 standard deviations in math and 0.23 standard deviations in reading (U.S. Department of Education, 1998)." In that report, an evaluation of the Knowledge Is Power Program (KIPP), Mathematica researchers found a very powerful impact from KIPP: "For the vast majority of KIPP schools studied, impacts on students’ state assessment scores in mathematics and reading are positive, statistically significant, and educationally substantial.... By year three, half of the KIPP schools in our sample are producing math impacts of 0.48 standard deviations or more, equivalent to the effect of moving a student from the 30th percentile to the 48th percentile on a typical test distribution.... Half of the KIPP schools in our sample show three-year reading effects of 0.28 standard deviations or more."
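The same back-of-the-envelope conversion applies to the KIPP figure, with the caveat that a 0.48 standard-deviation gain translates into a different number of percentile points depending on where a student starts (here the 30th percentile rather than the median). A quick check under the same normality assumption:

```python
from statistics import NormalDist

# KIPP math impact quoted above: a 0.48 SD gain for a student at the 30th percentile.
z_start = NormalDist().inv_cdf(0.30)                  # z-score at the 30th percentile (about -0.52)
print(round(100 * NormalDist().cdf(z_start + 0.48)))  # ~48th percentile, as the report states
```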

Is it appropriate to compare effect sizes among RCTs or, for that matter, among research in general? I am told that it is, although considerations such as cost-effectiveness and scalability certainly have to enter into the conversation, and implementation issues must also be attended to. With regard to teacher induction, the issue of cost-effectiveness was addressed in a 2007 cost-benefit study published in the Education Research Service's Spectrum journal and summarized in this New Teacher Center (NTC) policy brief.

Disclosure: I am employed by the NTC, which participated in the induction RCT, and I helped to coordinate NTC's statement on the study.
The NTC is "encouraged" by the study. However, the NTC believes that "it does not reflect the even more significant outcomes that can be achieved when districts have the time, capacity and willingness to focus on an in-depth, universal implementation of comprehensive, high-quality induction. It speaks volumes about the quality of induction and mentoring provided and the necessity of new teacher support that student achievement gains were documented despite [design and implementation] limitations to the study."


UPDATE: Read the Education Week story by Stephen Sawchuk here. And the Mathematica press release here.