The leading cause of bad policy decisions is the phrase “Research clearly shows … ” which suggests that all of us should accept one interpretation of some unnamed set of ‘research’ (most of which is not research at all). Understanding the needs of students not prepared for college mathematics is a long-term process, involving prolonged conversations among professionals as we attempt to understand what the data and the research say about our work and our students.
My goal here is to present another piece of scientific research on the impacts of developmental education, remedial mathematics in particular: Bettinger & Long's article "Addressing the Needs of Under-Prepared Students in Higher Education: Does College Remediation Work?", which you can download at http://www.nber.org/papers/w11325.pdf
This research is based on a large sample of students in Ohio. The strategy is to adjust for the selection bias that afflicts all studies of remediation: students referred to remediation tend to have both lower specific skills (math) and more academic challenges overall. The authors define a series of variables for this purpose, and eventually calculate a 'local average treatment effect' (LATE), which exploits the fact that cutoffs for remediation vary significantly among the 45 institutions of higher education in the data. The LATE analysis restricts the sample toward the middle of the score distribution, where the cutoffs have more impact; this analysis excludes the weakest students (roughly 10% of the overall sample).
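To make the identification logic concrete, here is a toy simulation, not the authors' data or code, with every number invented for illustration. Because remediation cutoffs differ across institutions, two students with the same placement score can face different referral decisions; that variation serves as an instrument, and a two-stage least squares regression recovers a local average treatment effect even though referred students differ systematically from non-referred ones.

```python
import numpy as np

# Toy simulation (all parameters made up): institution-specific cutoffs
# create quasi-random variation in who gets referred to remediation.
rng = np.random.default_rng(0)
n = 20000

score = rng.normal(50, 10, n)                      # placement-test score
cutoff = rng.choice([45.0, 50.0, 55.0], size=n)    # varies by institution
referred = (score < cutoff).astype(float)          # instrument Z

# Imperfect compliance: 85% of referred students take remediation,
# 10% of non-referred students take it anyway.
remedial = (rng.random(n) < 0.10 + 0.75 * referred).astype(float)  # treatment D

# Degree completion rises with score, plus a true +0.10 effect of remediation.
p_complete = 0.20 + 0.006 * (score - 40) + 0.10 * remedial
completed = (rng.random(n) < p_complete).astype(float)             # outcome Y

# Restrict to the middle of the score range, where the cutoffs actually bind
# (echoing the paper's exclusion of the weakest students).
mid = (score > 40) & (score < 60)
y, d, z, s = completed[mid], remedial[mid], referred[mid], score[mid]

# Two-stage least squares, controlling linearly for score:
#   stage 1: regress D on [1, Z, score]      -> fitted treatment d_hat
#   stage 2: regress Y on [1, d_hat, score]  -> coefficient on d_hat is the LATE
X1 = np.column_stack([np.ones_like(z), z, s])
d_hat = X1 @ np.linalg.lstsq(X1, d, rcond=None)[0]
X2 = np.column_stack([np.ones_like(z), d_hat, s])
late_hat = np.linalg.lstsq(X2, y, rcond=None)[0][1]
print(f"estimated LATE: {late_hat:.3f}")   # should land near the true 0.10
```

A naive comparison of remedial and non-remedial students in this simulation would look bad for remediation, because the referred group has lower scores to begin with; the instrument isolates the effect for students whose treatment status is actually moved by the cutoff, which is exactly the subgroup the paper's "LATE" describes.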
Key Finding #1: Equal Outcomes for those in Remediation
For outcomes such as dropping out and degree completion (bachelor's), students who had remediation achieved outcomes similar to those who did not, once the selection bias was accounted for.
Key Finding #2: For those most impacted by remediation cutoffs, outcomes are improved
The "LATE" analysis showed that remedial students had a lower rate of dropping out and a higher rate of degree completion compared to similar students without remediation. The authors regard this as an accurate (perhaps even conservative) estimate of the benefits of remediation.
Here is a nice quote from their summary:
We estimate that students in remediation have better educational outcomes in comparison to students with similar backgrounds and preparation who were not required to take the courses. [pg 19]
The research also explored the impact of remediation on student interest (as measured by type of major); you might find that discussion interesting, though it is not directly related to the question of 'thrive' in remedial math. I say that because the initial major data came from the survey attached to the ACT exam, usually completed long before a student examines the actual choices at the college where they enroll. The authors do find an interaction between remediation and changing type of major (specifically, changing out of math-related majors).
This study, like the others I've listed lately, provides a different picture of developmental mathematics than we hear in the loud conversations of policy makers (Complete College America, for example) and proponents of 'co-requisite remediation'. Those external forces almost always refer to 'research' that is simple (few variables) and aggregated; they have not dealt with the selection-bias problem at all. If you read their pronouncements carefully, you'll notice that the biggest evidence offered of our failure in remedial mathematics is the large group of students who never attempt their remedial math course(s); this 'damning conclusion' is presented without any evidence that the nature of the remedial math courses had any causal connection to that lack of attempt.
As professionals, it is our job to both learn about the valid research on our work (the good and not-so-good) and to inform others about what this research says.
Evidence exists which truly does indicate that remedial mathematics is where dreams go to thrive.
Join Dev Math Revival on Facebook: