Placement Tests – Valid?

Almost all community colleges use a placement test to identify students who need a developmental course.  Are these tests sufficiently valid for this high-stakes usage?

A recent publication from the Community College Research Center (CCRC) at Teachers College, Columbia University reports on a research study examining this validity; the report is “Predicting Success in College: The Importance of Placement Tests and High School Transcripts” (CCRC Working Paper No. 42) and is available at  http://ccrc.tc.columbia.edu/DefaultFiles/SendFileToPublic.asp?ft=pdf&FilePath=c:\Websites\ccrc_tc_columbia_edu_documents\332_1030.pdf&fid=332_1030&aid=47&RID=1030&pf=Publication.asp?UID=1030

I’ve spent a little time looking through this study.  One data point is generating quite a bit of interest … a statement that the two major tests (Compass, Accuplacer) have ‘severe error rates’ of 15% to 28%.  By severe error, the authors mean either of these situations:  (1) the placement test directs a student to a developmental course when the prediction is that they would actually pass the college-level course, or (2) the placement test directs a student to the college-level course when the prediction is that they would fail.
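To make the definition concrete, here is a minimal sketch of how a severe-error rate could be tallied from placement decisions and modeled pass probabilities. The function name, the sample data, and the 0.5 probability threshold are my assumptions for illustration, not details taken from the report.

```python
# Hypothetical sketch of the 'severe error' tally. Each student record pairs
# the placement decision with a modeled probability of passing the
# college-level course. (The 0.5 threshold is an assumption, not the study's.)

def is_severe_error(placed_in_dev: bool, p_pass_college: float) -> bool:
    """Severe error: (1) placed into developmental though predicted to pass
    the college-level course, or (2) placed into college level though
    predicted to fail it."""
    if placed_in_dev and p_pass_college >= 0.5:
        return True          # type (1): under-placement
    if not placed_in_dev and p_pass_college < 0.5:
        return True          # type (2): over-placement
    return False

# Illustrative data: (placed_in_dev, modeled probability of passing)
students = [(True, 0.70), (True, 0.30), (False, 0.40), (False, 0.80)]
rate = sum(is_severe_error(d, p) for d, p in students) / len(students)
print(f"severe error rate: {rate:.0%}")  # 2 of the 4 sample records qualify
```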

The methodology in the study begins with the assumption that the placement test score measures degrees of readiness, not just a ‘yes’ or ‘no’ (binary) result.  Using data from a state-wide community college system, the authors correlate the placement test scores with whether students actually passed the math course (either a developmental or a college-level course) to create a probability value.  Since the colleges involved generally did not allow students with scores below a cutoff to take the college-level course, the authors extrapolated to estimate the probability of passing below the cutoff; a similar approach produced a probability of passing the developmental course for scores above the cutoff.  For each placement test, the study includes between 300 and 800 students.
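A minimal sketch of the extrapolation idea, assuming a logistic model of pass probability as a function of placement score; the report's actual model may differ, and the coefficients below are invented for illustration. The curve is fit where outcomes are observed (above the cutoff) and then extended below it.

```python
# Sketch of extrapolating a fitted pass-probability curve below the cutoff.
# Coefficients are assumptions for illustration, not estimates from the study.
import math

def logistic(score: float, b0: float, b1: float) -> float:
    """Modeled probability of passing, as a function of placement score."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * score)))

b0, b1 = -5.0, 0.08   # assumed: each extra point adds 0.08 to the log-odds
cutoff = 76

for score in (60, 70, cutoff, 90):
    p = logistic(score, b0, b1)
    region = "extrapolated" if score < cutoff else "observed range"
    print(f"score {score}: P(pass college course) = {p:.2f} ({region})")
```

The key point the sketch illustrates is that probabilities below the cutoff come from the fitted curve, not from observed outcomes, since few students below the cutoff were allowed into the college-level course.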

Using these models for the probabilities, the authors then calculate the severe error rates cited above.  The values shown were for mathematics; the English rates for severe errors were slightly higher (27% to 33%).

Separate from the severe error rate, the study reports an ‘accuracy rate’ for each test and course; these accuracy rates reflect the pass rates for students above the cutoff and the failure rates for students below the cutoff.  In math, these values range from 50% to 60% (using a grade of C or better as passing).
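Under my reading of that definition, an accuracy rate could be computed as in the sketch below: a placement counts as accurate if a student above the cutoff passes the college-level course, or if a student below the cutoff would have failed it. The record layout and sample values are assumptions for illustration.

```python
# Sketch of the 'accuracy rate' described above. A placement is counted as
# accurate when the outcome agrees with the cutoff decision.
# (Record layout and sample data are assumptions, not from the report.)

def accuracy_rate(records, cutoff):
    """records: list of (placement_score, passed_college_course)."""
    accurate = 0
    for score, passed in records:
        if score >= cutoff and passed:
            accurate += 1        # correctly sent to the college-level course
        elif score < cutoff and not passed:
            accurate += 1        # correctly held out of the college-level course
    return accurate / len(records)

records = [(80, True), (85, False), (70, False), (65, True)]
print(f"accuracy rate: {accuracy_rate(records, cutoff=76):.0%}")
```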

The research also examined the relationship of high school performance both to this placement question and to general college success; the authors conclude that high school GPA is the single best predictor — even for predicting who needs a developmental course.

Several things occur to me about this study.  First of all, any measurement has a standard error; in the case of Accuplacer, this standard error varies with the score — for middle-range scores (like 60 to 80), the standard error is about 10.  If a student scores 69 when the cutoff is 76, there is some chance that the true score is ‘on the wrong side’ of the cutoff just due to the standard error of the measure.  In my experience, this standard error accounts for something like 10% of what the authors call ‘severe error’.  The main way to minimize this source of error is repeated measures — like having students take the placement test twice.
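The rough arithmetic behind the ‘wrong side of the cutoff’ point can be worked out directly, under the common assumption that measurement error is roughly normal (that modeling assumption is mine, not the study's): with an observed score of 69, a cutoff of 76, and a standard error of about 10, the chance that the true score is at or above the cutoff is 1 − Φ((76 − 69)/10).

```python
# Chance a true score sits at or above the cutoff despite a lower observed
# score, assuming normally distributed measurement error (my assumption).
import math

def normal_cdf(z: float) -> float:
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

observed, cutoff, sem = 69, 76, 10
p_wrong_side = 1.0 - normal_cdf((cutoff - observed) / sem)
print(f"P(true score >= {cutoff} | observed {observed}) = {p_wrong_side:.2f}")
```

With these numbers the probability comes out near one in four — a reminder that a score a few points below a cutoff is far from a definitive verdict.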

Another thought … the report does not identify the math courses involved, nor the cutoffs used.  Most results are given for “math 1” and “math 2”; the predictability of readiness is not uniformly distributed, and prediction is more difficult when the two levels of courses carry different expectations (reasoning, abstraction, problem solving, etc.).  Since the report does not identify which type of severe error contributes the most to the rate, it is possible that the cutoff itself accounts for the portion of the severe error rate beyond the standard error.

Though I doubt many of us would adopt it, the use of high school GPA as a placement measure seems both awkward and risky.  We would need to replicate this study in other settings — other states and regions — to see if the same pattern exists.  Even if that result is validated, the use of a composite measure of prior learning raises issues of equity and fairness; applying it to individual students may produce results that vary by student characteristics (perhaps even more than the placement tests do).

The other thought is that a hidden benefit of this report is a comparison of the two primary tests (Accuplacer, Compass) on various measures of validity.  For example, the Accuplacer accuracy rates in math were somewhat higher than those for Compass.

Overall, I do not see this study as raising basic questions about our use of placement tests.

 
Join Dev Math Revival on Facebook:
