Category: placement

The Majority of Community Colleges Use Multiple Measures, or “Using Statistics to Say Nothing Impressively”

My college has a team working on implementing multiple measures placement for our credit math courses.  We are early in the process, so we are primarily collecting information.  One of my colleagues found an organization with both internal resources and links to external resources.  One of those external resources led me to a “CAPR” report (more on the acronym in a moment) with a good example of bad statistics.

So, here’s the question:

What proportion of American community colleges (defined as associate degree granting institutions) use multiple measures to place students in mathematics?

A place with an ‘answer’ to this good question is the Center for the Analysis of Postsecondary Readiness (CAPR), with the report “Early Findings from a National Survey of Developmental Education Practices” (see https://postsecondaryreadiness.org/wp-content/uploads/2018/02/early-findings-national-survey-developmental-education.pdf ).  Using data from two national surveys, this report shows the graph below:

[Figure 1 (not reproduced here): percent of public two-year institutions reporting use of multiple measures for math placement, 2011 versus 2016.]
So, what is the probability of a given community college using multiple measures placement?  It’s not 57%, that’s for sure.  In general, this work is being done by states.  If your community college is in California, the probability of using multiple measures is pretty much 100%.  On the other hand, if your community college is in Michigan, the probability is somewhere around 5% to 10%.  Is the probability rising over time?

Here is what the report says to ‘interpret’ the graph:

This argument [other indicators of college readiness provide a more accurate measure of college success] has gained traction in recent years among public two-year institutions.  In a 2011 survey, all public two-year institutions reported using a standardized mathematics test to place students into college-level math courses; as shown in Figure 1, only 27 percent reported using at least one other criterion, such as high school grade point average or other high school outcomes. Just five years later, 57 percent of public two-year institutions reported using multiple measures for math placement.

Clearly, the implication is that community colleges are choosing to join this ‘movement’.  Of course, some community colleges are making that choice (as mine is doing).  However, a large portion of that 57% in 2016 reflects states with mandated multiple measures (California, North Carolina, Georgia, Florida, probably others).  The aggregate figure therefore has no particular meaning for any given state or college; a genuine movement could only be measured in states without a mandate.

Essentially, the authors are using statistics to say absolutely nothing in an impressive manner.  Multiple measures is clearly a ‘good thing’ because more colleges are doing it, so the logic goes.  Unfortunately, the data does not mean anything like that — multiple measures are most commonly imposed by non-experts who have the authority to mandate a policy.  [By ‘non-expert’, I mean people whose profession does not involve the actual work of getting students correctly placed in mathematics … politicians, chancellors, etc.]

 Join Dev Math Revival on Facebook:

Cooked Carrots and College Algebra

Perhaps your state or college is using high school grade point average (HS GPA) as a key placement tool in mathematics, in the style of North Carolina.  The rationale for this approach comes from studies showing a higher correlation between HS GPA and success in college mathematics than between standardized test scores (SAT, Accuplacer, etc.) and that same success.  Is this a reasonable methodology?

Some of us are doing true multiple measures, where HS GPA is included along with other data (such as test scores).  However, North Carolina is using HS GPA as the primary determinant of college placement; see http://www.nccommunitycolleges.edu/sites/default/files/academic-programs/crpm/attachments/section26_16aug16_multiple_measures_of_placement.pdf .

This HS GPA movement reminds me of a specific day in a graduate-level research methods class I took.  On that day, the professor presented this scenario:

Data shows that students who like cooked carrots are much more likely to succeed in college.  Should a preference for cooked carrots be included as a factor in college admissions?

The goal, of course, was to consider two basic statistical ideas.  First, correlation does not equal explanation.  Second, most correlations involve a number of confounding variables.  In the case of cooked carrots, the obvious confounding variable is money: families eating cooked carrots, as a rule, have more money than those who don’t.  Money (aka ‘socioeconomic status’, or SES) is a confounding variable in much of our work.  We could even conjecture that liking cooked carrots is associated with a stable family structure as well as non-impoverished neighborhoods, which means that cooked-carrot-liking students will tend to have attended better schools.  Of course, this whole scenario is bound up in the cultural context of that era (the 1970s in the USA).
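To make the confounding point concrete, here is a minimal simulation (Python, with entirely made-up numbers) in which SES drives both a taste for cooked carrots and college success, and the two end up correlated even though neither causes the other.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Latent confounder: family socioeconomic status (standardized).
ses = rng.normal(0, 1, n)

# Carrot preference and college success each depend on SES plus independent
# noise, but not on each other.
likes_carrots = (ses + rng.normal(0, 1, n)) > 0
succeeds = (ses + rng.normal(0, 1, n)) > 0

# The two outcomes are correlated anyway, purely through SES.
r = np.corrcoef(likes_carrots, succeeds)[0, 1]
print(f"correlation between carrot preference and college success: {r:.2f}")

Run it and the correlation lands in the neighborhood of 0.3, with no causal link anywhere in sight.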

In a similar way, proponents point out the high correlation between HS GPA and success in college mathematics.  That correlation (often 0.4 or 0.5) is higher than our test score correlations (often 0.2 or 0.3), which is often ‘proof enough’ for academic leaders who do not apply statistical reasoning to the problem.  Keep in mind that even a correlation of 0.5 explains only 25% of the variance in outcomes.  Here is the issue:

If I am going to use a measure to sort students, I better have a sound rationale for this sorting.

That rationale is unlikely to ever exist for HS GPA … no explanation is provided beyond the statistical artifact of ‘correlation’.  Student A comes from a high-performing school and has a 2.5 GPA; do they need remediation?  Student B comes from a struggling school and has a 3.2 GPA; are they college ready?  Within a given school, which groups of students are likely to have low GPA numbers?  (Hint: HS GPA is not race-neutral.)

If you are curious, there is an interesting bit of research on HS GPA issues done by Educational Testing Service (ETS) in 2009; see https://www.ets.org/Media/Research/pdf/RR-13-09.pdf .  One of the findings:  HS GPA is “contaminated” by SES at the student level (pg 14).   Just like cooked carrots.

So, if you are okay with ‘cooked carrots’ being a sorting tool for college algebra, go ahead with HS GPA as a placement tool.

Join Dev Math Revival on Facebook:

The Placement Test Disaster?

For an internal project at my institution, I’ve been looking at the relationships between Accuplacer test scores, ACT Math scores, and performance in both developmental and college-level courses.  Most of the results are intended for my colleagues here at LCC.  However, some patterns in those relationships are important for us to explore together.

So, the first pattern that is troubling is this:

Students who place into a pre-calculus course based on their ACT Math score have lower outcomes than those who place based on the Accuplacer “College Level Math” test … and lower than those who needed to take intermediate algebra before pre-calculus.

We use the ‘college readiness’ benchmark on the ACT Math test, a score of 22 (see https://www.act.org/content/act/en/education-and-career-planning/college-and-career-readiness-standards/benchmarks.html ).  The pattern in our data for the ACT Math is similar to results reported at other institutions … though we tend not to talk about this.

Of course, the use of an admissions test (ACT or SAT) for course placement is “off label” — the admissions tests were not designed for this purpose.  We tend to use the ACT option for placement in response to political pressure from administrators (internally) and from stakeholders (externally), and sometimes under the guise of “multiple measures”.  The patterns in our data suggest that the ACT Math score is only valid for placement when used in a true multiple measures system, where two or more data sources are combined to create a placement.  However, most of us operate under ‘alternative measures’, where students face different options and can select whichever yields the highest placement; alternative measures practically guarantee the maximum error rate in placement, and a single placement test almost always provides better results.
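Here is a minimal sketch (with invented noise levels, not our actual data) of why a ‘take the highest result’ rule inflates over-placement compared with a single measure: with two noisy measures, a student only needs to get lucky on one of them.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# True readiness (standardized); a student is "ready" if readiness > 0.
readiness = rng.normal(0, 1, n)
ready = readiness > 0

# Two imperfect measures of readiness with independent noise.
placement_test = readiness + rng.normal(0, 0.8, n)
admissions_test = readiness + rng.normal(0, 0.8, n)

# Policy 1: single measure -- placed if the placement test clears the cutoff.
placed_single = placement_test > 0
# Policy 2: alternative measures -- placed if EITHER score clears the cutoff.
placed_either = (placement_test > 0) | (admissions_test > 0)

# Over-placement: students placed into the course who are not actually ready.
print(f"over-placed, single measure: {np.mean(placed_single & ~ready):.1%}")
print(f"over-placed, take-the-best:  {np.mean(placed_either & ~ready):.1%}")

In this toy setup the ‘either measure’ rule over-places roughly half again as many students as the single measure; the exact numbers depend on the noise levels, but the direction of the effect does not.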

The second pattern reflecting areas of concern:

The correlations are low between (A) the ACT Math and Accuplacer College Level Math test, and (B) the Accuplacer Algebra and Accuplacer Arithmetic tests.

The second combination is understandable, in itself; the content of the Algebra and Arithmetic tests has little overlap.  The problem lies in our mythology around a sequence of math courses … that the prerequisite to algebra is ‘passing’ basic math.  Decades of research on algebra success provide strong evidence that there is little connection between measures of arithmetic mastery and passing a first algebra course.  In spite of this, we continue to test students on arithmetic when their curricular needs are algebraic:  that is a disaster, and a tragedy.

The first ‘low correlation’ (ACT Math, College Level Math) is not what we would expect.  The content domains for the two tests have considerable overlap, and both tests measure aspects of ‘college readiness’.  As an interesting ‘tidbit’, we find that minority students (African American students in particular) are more likely to place into pre-calculus based on the more reliable College Level Math test, while white students are more likely to place based on the ACT Math score, creating a bit of a role reversal (white students placed at a disadvantage).

Placement testing can add considerable value … and placement testing can create extreme problems.  For example, students with an average high school background will frequently earn a ‘college ready’ ACT Math score while still having too many gaps in their preparation for pre-calculus.  A larger problem (in terms of number of students) comes from the group of students a bit ‘below average’, who tend to do okay on a basic algebra test but not so well on arithmetic; the result is thousands of students taking an arithmetic-based course when they could have succeeded in a first algebra course (or Math Literacy).

Those two problems are symptoms of using ‘multiple measures’ without actually combining measures: alternative measures allow students to select the ‘maximum placement’ even when other measures (with higher reliability) suggest a placement better matched to their chances of success.

As a profession, we are under considerable pressure to avoid the use of placement tests.  Policy makers have been attacking remediation for several years now, and more reasonable advocates suggest using other measures.  The professional response is to insist on the best outcomes for students — which is true multiple measures; if that is not viable, a single-measure placement test is better than either a college-admission test or a global measure of high school (like HS GPA).

And, all of us should deal with this challenge:

Why would we require any college student to take a placement test on Arithmetic, when their college program does not specifically require proficiency in the content of that type of test?

At my institution, I don’t think that there are any programs (degrees or certificates) that require basic arithmetic.  We used to have several … back in 1980!  Technology in the workplace has shifted the quantitative needs, while our curriculum and placement have tended to remain fixated on an obsolete view.

 Join Dev Math Revival on Facebook:

Are Those Tests Evil? (ACT, SAT)

So, I have been examining my college’s data on the relationship between passing a math class and either a placement test score or a score on the ACT Math section.  I shared some correlation data in an earlier post … this post is more about the validity of the measures.

One issue that has been impacting 4-year colleges & universities is the ‘test optional’ movement, where institutions make admissions decisions based on factors other than standardized tests.  This is an area of some research; one example is at http://www.act.org/content/dam/act/unsecured/documents/MS487_More-Information-More-Informed-Decisions_Web.pdf if you are interested.  Since I work at a community college, all of our admissions decisions are ‘test optional’.

Michigan uses standardized tests (ACT or SAT) as part of the required testing for students who complete high school, and the vast majority of our students do complete high school in Michigan.  Curiously, fewer than half of our students have standardized test scores on their college record.  This creates both some interesting statistical questions and some practical problems.

For the past several years, the ACT has been that test for Michigan high school students (a switch was made to the SAT this year).  We use the ACT standard for ‘college readiness’, which is a 22 on the ACT Math section.  That standard was determined by ACT researchers, using the criterion of a “75% probability of passing college algebra” based on a very large sample of data.

A problem with this standard is that “college algebra” has several meanings in higher education.  For some people, college algebra is synonymous with a pre-calculus course; for others, college algebra is a separate course from pre-calculus.

My institution actually offers both flavors of college algebra; we have a “Pre-Calculus I” course as well as a “College Algebra” course.  The College Algebra course does not lead to the standard calculus courses, but does prepare students for both applied calculus and a statistics course.  The Pre-Calculus I course is a very standard first-semester course, and has a lower pass rate than College Algebra.  The prerequisite to both courses is one of (a) an ACT Math score of 22, (b) an Accuplacer College Level Math (CLM) score of 55, or (c) passing our intermediate algebra course; all three of these give the student a “Math Level 6” indicator.  We assign a higher math level for scores significantly above the thresholds listed here.
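For readers who like to see the rules written out, here is a hypothetical sketch of that prerequisite logic in Python (the function name and structure are mine, not our actual registration system):

def math_level_6(act_math=None, clm=None, passed_intermediate_algebra=False):
    """Return True if any of the listed Level 6 prerequisites is met.

    The real system also assigns higher levels for scores well above these
    cutoffs; this sketch covers only the Level 6 thresholds described above.
    """
    if passed_intermediate_algebra:
        return True
    if act_math is not None and act_math >= 22:
        return True
    if clm is not None and clm >= 55:
        return True
    return False

# Example: ACT Math 21 alone does not qualify, but adding a CLM of 60 does.
print(math_level_6(act_math=21))           # False
print(math_level_6(act_math=21, clm=60))   # True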

So, here is what we see in one year’s data for the Pre-Calculus course:

  • ACT Math 22 to 25: 63% pass Pre-Calculus I
  • CLM 55 to 79: 81% pass Pre-Calculus I
  • Passed Intermediate Algebra: 71% pass Pre-Calculus I

The first two proportions are significantly different, and the first proportion is significantly different from the ‘75%’ threshold used by ACT.  One conclusion is that the ACT college readiness standard is based more on other “college algebra” courses than on pre-calculus.
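As a sketch of the kind of check behind ‘significantly different’, here is a two-proportion z-test comparing 63% against 81%.  The group sizes below are hypothetical, since the actual counts are not reported in this post.

from math import sqrt
from statistics import NormalDist

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided two-proportion z-test using a pooled proportion."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical group sizes: 150 placed by ACT Math, 120 placed by the CLM.
z, p = two_proportion_z(0.63, 150, 0.81, 120)
print(f"z = {z:.2f}, p = {p:.4f}")

With groups of roughly that size, the difference is significant at any conventional level; much smaller groups would, of course, weaken the conclusion.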

One of the things we find is that there is little correlation between the ACT Math score and passing Pre-Calculus.  In other words, students with a 25 ACT Math are not any more likely to pass than those with a 22.  This is not quite as true with the CLM; the probability of passing increases (though slightly) as scores rise above the cutoff.
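That kind of analysis is straightforward to reproduce on your own campus data.  Here is a hypothetical sketch (the file and column names are invented) that compares pass rates by ACT Math score and computes the point-biserial correlation between score and passing.

import pandas as pd

# Hypothetical extract: one row per pre-calculus student, with an ACT Math
# score ('act_math') and a 0/1 outcome ('passed').
df = pd.read_csv("precalc_outcomes.csv").dropna(subset=["act_math", "passed"])

# Pass rate at each ACT Math score from the cutoff upward.
print(df[df["act_math"] >= 22].groupby("act_math")["passed"].agg(["mean", "count"]))

# Pearson correlation with a 0/1 outcome is the point-biserial correlation.
print(df["act_math"].corr(df["passed"]))

If the pass-rate column is essentially flat from 22 upward and the correlation hovers near zero, you are looking at the same pattern we see.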

Now, a question is “why did so many students NOT provide the College with their ACT scores”?  Well, perhaps the better question … “Are those who did not provide the scores significantly different from those who did provide them?”  That is a workable question, though the data is not easy to come by.  The concern is that some types of students are more likely to provide the ACT scores (either white students or students from more affluent schools).

We’ve got reason to have doubts about using the ACT Math score as part of a placement cutoff, and reason to prefer the CLM for predictive power.

More of us need to analyze this type of data and share the results; very little research is available on validity issues of standardized tests done by practitioners.

 
Join Dev Math Revival on Facebook: