Category: placement

HS GPA and Math Placement

In the policy world, “multiple measures” is the silver bullet for solving all issues of student placement in college.  Within the multiple measures work, the HS GPA is presented as the most reliable measure for student placement.  This conclusion is the result of some good research being used for disruptive purposes: a core conclusion is generalized to mathematics when the data concerned language (‘english’) placement.

A central reference in the multiple measures genre is the Scott-Clayton report from the CCRC ( https://ccrc.tc.columbia.edu/publications/high-stakes-placement-exams-predict.html ).  One of the key findings in that report is that placement tests have more validity in math than in english.  Other results include the fact that placement accuracy could be improved by including the HS GPA … especially in English.  However, the narrative since that time has repeated the unqualified claim — that HS GPA is a better predictor than placement tests.  Repetition of a false claim is a basic strategy in the world of propaganda.

In an earlier post, I shared a graphic on HS GPA vs ACT tests.

[Graphic: probability of passing college algebra (B or better) by HS GPA, shown for ACT score ranges]

This data is from a large ACT study, which means that … if the GPA were a good predictor … we would see the probability of passing college algebra (B or better) rise with GPA within every ACT score range.  The fact that the two lowest ACT ranges show an almost-zero rate of change contradicts that expectation.

Locally, I have looked at HS GPA versus math testing using SAT Math scores:

[Graph: HS GPA versus SAT Math, with horizontal reference lines at our math course cutoffs]

Although this graph does not look at ‘success’, we have plenty of other data to support the conclusion of Scott-Clayton — math placement tests have better validity than English tests.  [The horizontal reference lines in this graph represent the cutoffs for our math classes.]

One might make the argument that math tests work fine for algebraic-based math courses, and that HS GPA works better for general education math courses.  As it turns out, we have been using a HS GPA cutoff for our quantitative reasoning course (Math119) … which includes some algebra, but is predominantly numeracy.

Results:

  • Students who used their HS GPA to place:  44% pass rate
  • Students who placed via a math test:  77% pass rate
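The post reports only the pass rates, not the sample sizes.  Still, a quick two-proportion z-test sketch — with hypothetical group sizes of 100, an assumption of mine — shows that a 44% versus 77% gap is far too large to dismiss as noise:

```python
from math import sqrt

# Hypothetical sample sizes (the post reports only the pass rates).
n_gpa, p_gpa = 100, 0.44      # placed via HS GPA (assumed n)
n_test, p_test = 100, 0.77    # placed via math test (assumed n)

# Standard two-proportion z-test: pooled proportion, then standard error.
p_pool = (n_gpa * p_gpa + n_test * p_test) / (n_gpa + n_test)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_gpa + 1 / n_test))
z = (p_test - p_gpa) / se
print(f"z = {z:.2f}")   # well beyond the 1.96 threshold at the 5% level
```

Even at these modest assumed enrollments, the gap is several standard errors wide; larger real samples would only strengthen that.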

In fact, I am seeing indications in the data that the HS GPA should be used as a negating factor against placement tests … a score above a cutoff with a low HS GPA indicates a lack of ‘readiness’ to succeed.

In theory, a multiple-measures formula could include negative impacts (in this case, HS GPA below 3.0).  In practice, this is not usually done.  [Another point:  multiple measures formulas are based on statistical analysis … and politics … which transforms a mathematical problem into a statistical one, producing a ‘formula score’ that has no direct meaning to us or our students.  An irony within this statistical work is that the HS GPA lacks the interval quality needed to use a mean: the HS GPA itself is a bad measure, statistically.]

Regardless of formulas for multiple measures, we have sufficient data to conclude that HS GPA is well correlated with general college success and with readiness in English, but that HS GPA has little independent contribution in measuring math readiness.

Mathematics placement should be a function of inputs with established connections to mathematics.  The results should be easy to interpret for our students.  Any use of the HS GPA in mathematics placement violates principles of statistics and also contradicts research.


Core Deceits for Destroying Remediation

Back in 2012, several groups … including the Dana Center and Complete College America … published a statement entitled Core Principles for Transforming Remedial Education.  That statement has since been used as an a priori proof of specific solutions to perceived problems in the profession of preparing students for success in college-level courses, for completion, and for a better life.

The core principles stated have been treated as research-based conclusions with a sound theoretical underpinning.  The purpose of this post is to look at the truth value of each statement — thus the title about ‘deceits’.


Here we go …

Principle 1: Completion of a set of gateway courses for a program of study is a critical measure of success toward college completion.

This is clearly a definition being proposed for research.  Certainly, completing gateway courses is a good thing.  “Success”?  Nope; at best, this completion would be a measure of association, and our students have complicated lives and very diverse needs.  For some of them, I would make the case that delaying their gateway courses is the best thing we can do, since this step tends to lock them into a program.  Curiously, the rhetoric attached to this principle states that remedial education does not build ‘momentum’.  This is clearly a marketing phrase based on appealing to emotional states in the reader.  Anybody who has been immersed in remedial education has seen more momentum than the authors of this statement have seen in gateway courses.


Principle 2: The content of required gateway courses should align with a student’s academic program of study — particularly in math.

“Alignment” is the silver bullet du jour.  Any academic problem is reduced to building proper ‘alignment’.  The word is ill-defined in general (unless we are speaking of automobiles), and is especially ill-defined in education.  The normal implementation is that the mathematics is limited to the applications a student will encounter in their program of study.  I’ve written about these issues (see At the Altar of Alignment and Alignment of Remediation with Student Programs).  In the context of this post, I’ll just add that the word alignment is like the word momentum — almost everybody likes the idea, though almost nobody can actually show what it is in a way that helps students.

The rationale for this deceit targets remedial mathematics as being the largest barrier to success.  If the phrase is directed at sequences of 3 or more remedial math courses, I totally agree — there is a significant research base for that conclusion.  There is no research base suggesting that 1 or 2 remedial math courses are the largest barrier to completion.


And …

Principle 3: Enrollment in a gateway college-level course should be the default placement for many more students.

This deceit is based on two cultural problems.  One is an attack on using tests to place students in courses — both ‘english’ and mathematics.  In ‘english’, there is a good reason to question the use of tests for placement:  cultural bias is almost impossible to avoid.  For mathematics, there is less evidence of a problem.  However, the deceit suggests that both types of testing are ‘bad’.  Another principle (also a deceit) addresses placement testing directly.

The second cultural problem is one of privilege:  parents from well-off areas with good schools are upset that their “precious children” are made to take a remedial course.  These parents question our opinions about what is best for students, and some of them are engaged with the policy influencers (groups such as those who drafted the ‘core principles’ document being discussed).  Of course, I have no evidence of these statements … just as the authors of the deceit have no evidence that it would be better with the default placement rule.

There is an ugly truth behind this deceit:  Especially in mathematics, we have tended to create poorly designed sequences of remedial courses which appear (to students and outsiders) to serve the primary purpose of weeding out the ‘unworthy’.  We have had a very poor record of accepting diversity, and little tolerance of ‘not quite ready’.  Decades of functioning in this mode left us vulnerable to the disruptive influences evidenced by the core deceits.


Next:

Principle 4: Additional academic support should be integrated with gateway college-level course content — as a co-requisite, not a prerequisite.

I am impressed by the redundancy of ‘integrated’ and ‘co-requisite’.  This is a give-away that the authors are more concerned with rhetoric supporting their position than with actual students.  This call to use only co-requisite ‘remediation’ is also a call to kill off all stand-alone remediation.  I’ve also written on this before (see Segregation in College Mathematics: Corequisites! Pathways? and Where is the Position Paper on Co-Reqs? Math in the First Year? for starters).

Within mathematics, we would call principle 3 a ‘conjecture’ and principle 4 a ‘corollary’.  This unnecessary repetition is a give-away that the argument to kill remedial courses is more important than improving education.  The groups behind the ‘core principles [sic: deceits]’ have been beating the drum with ‘evidence’ that it works.  Underneath this core deceit is a very bad idea about mathematics:

The only justification for remediation is to prepare students for one college level math course (aligned with their program, of course 🙁 )

Remedial mathematics has three foundations — preparing students for their college math course, preparing students for other courses (science, technology, economics, etc.), and preparing students for success in general.  Perhaps we have nothing to show for our efforts on the last item listed, but there are clear connections between remedial mathematics and other courses in the student’s program.  Co-requisite remediation is a closed-system solution to an open-system problem (see The Selfishness of the Corequisite Model).


Next:

Principle 5: Students who are significantly underprepared for college level academic work need accelerated routes into programs of study.

Conceptually, this principle is right on — there is no deceit in the basic idea.  The loophole is the one word ‘routes’.  The commentary in the principles document is appropriately vague about what it means, and I can give this one my seal of approval.


To continue …

Principle 6: Multiple measures should be used to provide guidance in the placement of students in gateway courses and programs of study.

This is the principle that follows up on the default placement deceit.  Some of the discussion is actually good (about providing more support to students before testing, including review sources).  The deceit in this principle comes in two forms — the direct attack on placement tests, and the unquestioning support of the HS GPA for college course placement.

The attack on placement tests has been vicious and prolonged.  People use words like ‘evil’; one of my administrators uses the word ‘nuances’ as code for ‘this is so evil I don’t have a polite word for it’.  This attack on placement tests is a direct reason why we no longer have the “Compass” option.  The deceit itself is based on reasonably good research being generalized without rationale.  Specifically, the research consistently supports a better record for mathematics placement tests than for ‘english’ tests, but the multiple measures propaganda includes mathematics anyway.

The use of HS GPA in college course placement is a recent bad idea.  I’ve written about this in the past (see Does the HS GPA Mean Anything? and Placement Tests, HS GPA, and Multiple Measures … “Just the Facts” for starters).   Here is a recent scatterplot for data from my college:

[Scatterplot: math measure versus HS GPA, with horizontal lines at our placement cutoffs]

The horizontal lines represent our placement rules.  The correlation in this data is 0.527; statistically significant, but practically almost useless (r² ≈ 0.28, so barely over a quarter of the variance is explained).  Our data suggests that using the HS GPA adds very little value to a placement rule; at the micro level, I use the HS GPA as a part of ‘multiple measures’ in forming groups … and have found that students would have been better served if I had ignored the HS GPA.

The last:

Principle 7: Students should enter a meta-major when they enroll in college and start a program of study in their first year in order to maximize their prospects of earning a college credential.

Connected with this attack on remedial courses is a call for guided pathways, which is where the ‘meta-major’ comes from.  The narrative for this principle again uses the word ‘aligned’.  In many cases (like my college), the ‘first year’ is implemented as a ‘take their credit math course in the first year’ rule.  Again, I have addressed these concepts (see Where is the Position Paper on Co-Reqs? Math in the First Year? and Policy based on Correlation: Institutionalizing Inequity).

[Image: example meta-major graphic]

Meta majors are a reasonable concept to use with our student population.  However, the normal implementation almost amounts to students selecting a meta-major because they like the graphical image we use.  In other cases, like the image shown here, meta-majors are just another confusing construct we try to get potential students to survive.

As is normal, we can find both some good truths and some helpful guidance … even within these 7 deceits about remediation.  Taken on its own merits, the document is flawed at basic levels, and would not survive (even in its final form) the normal review process for publication in most journals.

Progress is made based on deeper understanding of problems, building a conceptual and theoretical basis, and developing a community of practitioners.  The ‘7 deceits’ does little to contribute to that progress, and those deceits are normally used to destroy structures and courses.  Our students deserve better, and our institutions should be ashamed of using deceitful principles as the basis for any decision.

 

Does the HS GPA Mean Anything?

In the context of succeeding in college-level mathematics, does the HS GPA have any meaning?  Specifically, does a high school grade point average over some arbitrary value (such as 3.2) indicate that a given student can succeed in college algebra or quantitative reasoning with appropriate support?

Statistically, the HS GPA should not exist.  The reason is that the original measures (semester grades on a scale from 0 to 4, with 0.5 increments) are ordinal measures; higher values are greater than smaller values, but nothing more.  A mean of a measure depends upon a presumption of “interval measures” — that the difference between 0 and 1 is the same as the difference between 3 and 4.  The existence of GPA (whether HS or college) is based on convenient ignorance of statistics.
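The ordinal-versus-interval point can be made concrete with a minimal sketch (the students and grade patterns are invented for illustration): any strictly increasing re-coding of the grade scale preserves all of the ordinal information, yet a mean computed on the re-coded grades can reverse a ranking of students.

```python
# Grades are ordinal: A=4, B=3, C=2, D=1, E=0.  Order matters; spacing is arbitrary.
steady = [3, 3, 3, 3]    # all B's
uneven = [4, 4, 1, 1]    # A's and D's

mean = lambda xs: sum(xs) / len(xs)
print(mean(steady), mean(uneven))   # 3.0 vs 2.5 — 'steady' ranks higher

# A strictly increasing re-coding keeps the same ordinal information ...
recode = {0: 0, 1: 1, 2: 2, 3: 3, 4: 7}
print(mean([recode[g] for g in steady]),    # 3.0
      mean([recode[g] for g in uneven]))    # 4.0 — the ranking flipped
```

Since the ranking of means depends on the arbitrary spacing of the grade codes, the mean (and hence the GPA) is not a well-defined summary of ordinal data.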

Given the long-standing existence of the HS GPA, one cannot hope for leaders to suddenly recognize the basic error.  Therefore, let’s assume that the HS GPA is a statistically valid measure of SOMETHING.  What is that something?  Is there a connection between that something and readiness for college mathematics?

The structure of the data used for the HS GPA varies considerably by region and state.  In some areas, the HS GPA is the mean of 48 values … 6 courses at 2 semesters per year for 4 years.  If the school schedule allows for 7 classes, then there are 56 values; that type of variation is probably not very significant for our discussion.  The meaning of the HS GPA is more impacted by the nature of the 6 (or 7) classes each semester.  How many of these courses are mathematical in nature?  In most situations, at the current time, we might see 8 of the 48 grades coming from a mathematics course with another 4 to 8 coming from a science course.  Although most students take “algebra II” in high school, a smaller portion take a mathematically intense science course (such as physics).

In other words, we have a measure which has approximately a 20% basis in mathematics alone.  The other 80% represent “english”, social science, foreign language, technology, and various electives.  Would we expect this “20% weighting” to produce useful connections between HS GPA and readiness for college mathematics?  If these connections exist, we should see meaningful relationships between HS GPA and accepted measures of readiness.
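A quick illustrative computation (using the 48-grade structure described above) shows what a 20% weighting means in practice: even a dramatic collapse in the math grades barely moves the overall GPA.

```python
# Assumed structure from the text: 48 semester grades, 8 of them in mathematics.
total_grades, math_grades = 48, 8

# Suppose every math grade falls by two full letter grades, e.g. B (3.0) to D (1.0).
drop = 2.0
gpa_change = math_grades * drop / total_grades
print(round(gpa_change, 2))   # 0.33 — a dramatic math collapse moves the GPA by a third of a point
```

A student could go from passing every math course comfortably to nearly failing all of them, and the HS GPA would register only a 0.33 difference.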

So, I have spent some time looking at our local data.  We have only been collecting HS GPA data for a short time (less than one year), and this data can be compared to other measures.  Here are the correlation coefficients for the sample (n>600 for all combinations):

  • HS GPA with SAT Math: r = 0.377
  • HS GPA with Accuplacer College Level Math: r = 0.164
  • HS GPA with Accuplacer Algebra:   r = 0.338

Compare this with the correlations of the math measures:

  • SAT Math with Accuplacer College Level Math: r = 0.560
  • SAT Math with Accuplacer Algebra: r = 0.627
  • Accuplacer College Level Math with Accuplacer Algebra: r = 0.526
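These coefficients are easier to compare as variance explained (r², the same “proportion of variation” framing used in the CCRC report).  A quick computation on the values above:

```python
# Correlations reported above; squaring gives the proportion of variance explained.
correlations = {
    "HS GPA vs SAT Math": 0.377,
    "HS GPA vs Accuplacer College Level Math": 0.164,
    "HS GPA vs Accuplacer Algebra": 0.338,
    "SAT Math vs Accuplacer College Level Math": 0.560,
    "SAT Math vs Accuplacer Algebra": 0.627,
    "Accuplacer College Level Math vs Accuplacer Algebra": 0.526,
}
r_squared = {name: r * r for name, r in correlations.items()}
for name, r2 in sorted(r_squared.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {r2:.1%} of variance explained")
```

The best HS GPA pairing explains about 14% of the variance; the math-to-math pairings explain 27% to 39%.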

Of course, correlation coefficients are crude measures of association.  In some cases, the measures can have a useful association.  Here is a scatterplot of SAT Math by HS GPA:

[Scatterplot: SAT Math by HS GPA, with horizontal lines at 550 and 520]

The horizontal lines represent our cut scores for college level mathematics (550 for college algebra, 520 for quantitative reasoning). As you can see from this graph, the HS GPA is a very poor predictor of SAT Math.  We have, of course, examined the validity of the traditional measures of readiness for our college math courses.  The overall ranking, starting with the most valid, is:

  1. Accuplacer Algebra
  2. Accuplacer College Level Math
  3. SAT Math

The order of the first two differs depending on whether the context is college algebra or quantitative reasoning.  In all cases, the measures show consistent validity for promoting student success.

Here is a display of related data, this time relative to ACT Math and HS GPA.  The curves represent the probability of passing college algebra for scores on ACT Math.

[Graph: probability of passing college algebra by HS GPA, with curves for ACT Math scores]

[Source:  http://www.act.org/content/act/en/research/act-scores-and-high-school-gpa-for-predicting-success-at-community-colleges.html ]

For math, this graph is saying that basing a student’s chance of success just on the HS GPA is a very risky proposition.  Even at the extreme (a 4.0 HS GPA), the probability of passing college algebra ranges from about 20% to about 80%.  The ACT Math score, by itself, is a better predictor. The data suggests, in fact, that the use of the HS GPA should be limited to predicting who is not going to pass college algebra in spite of their ACT Math score … ACT Math 25 with HS GPA below 3.0 means “needs more support”.

So, back to the basic question: What does the HS GPA mean? Well, if one ignores the statistical violation, the HS GPA has considerable meaning — just not for mathematics.  The HS GPA has long been used as the primary predictor of “first year success in college” (often measured by 1st-year GPA, another mistake).  Clearly, there is an element of “academic maturity, or lack thereof” in the HS GPA measure.  A HS GPA below 3.0 seems to indicate insufficient academic maturity to succeed in a traditional college algebra course (see the graph above).

We know that mathematics forms a minor portion of the HS GPA for most students.  Although a small portion of students might have 50% of their HS GPA based on mathematically intense courses, the mode is probably closer to 20%.  Therefore, it is not surprising that the HS GPA is not a strong indicator of readiness for a given college mathematics course.

My college has recently implemented a policy to allow students with a HS GPA 2.6 or higher to enroll in our quantitative reasoning course, regardless of any math measures.  The first semester of data indicates that there may be a problem with this … about a third of these students passed the course, compared to the overall pass rate of about 75%.

I suggest that the meaning of the HS GPA is that the measure can identify students at risk, who perhaps should not be placed in college level math courses even if their test scores qualify them. In some cases, “co-requisite remediation” might be appropriate; in others, stand-alone developmental mathematics courses are more appropriate.  My conjecture is that this scheme would support student success:

  • Qualifying test score with HS GPA > 3.00, place into college mathematics
  • Qualifying test score with 2.6 ≤ HS GPA < 3.0, place into co-requisite if available, developmental if not
  • Qualifying test score with HS GPA < 2.6, place into developmental math
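The conjecture above can be sketched as a decision rule.  This is only an illustration (the function name and labels are mine, not an established policy), and it adds one assumption the list leaves implicit: a non-qualifying test score places into developmental math.

```python
def placement(test_qualifies: bool, hs_gpa: float, coreq_available: bool = True) -> str:
    """Sketch of the conjectured scheme: a qualifying test score is necessary
    but not sufficient, and the HS GPA acts only as a negating factor."""
    if not test_qualifies:
        return "developmental math"       # assumed: no qualifying score, no college math
    if hs_gpa >= 3.0:
        return "college mathematics"
    if hs_gpa >= 2.6 and coreq_available:
        return "co-requisite"
    return "developmental math"

print(placement(True, 3.4))           # college mathematics
print(placement(True, 2.8))           # co-requisite
print(placement(True, 2.8, False))    # developmental math (no co-requisite offered)
print(placement(True, 2.2))           # developmental math
```

Note the design choice: the HS GPA never moves a student up, only down — consistent with the earlier observation that a high test score with a low HS GPA signals a lack of ‘readiness’.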

This, of course, is not what the “policy influencers” want to hear (i.e., Complete College America and related groups).  They suggest that we can solve a problem by both ignoring prior learning of mathematics and applying bad statistics.  Our responsibility, as professionals, is to articulate a clear assessment based on evidence to support the success of our students in their college program.

 

Placement Tests, HS GPA, and Multiple Measures … “Just the Facts”

We know that repeated statements are often treated as proven statements, even if the original version of the statement was not accurate.  In other words, if you want people to accept your point of view … don’t worry about whether it is accurate,  just make sure that your statement is repeated by lots of people over a period of time.  Like “HS GPA is a better predictor than placement tests”.

The original message seems to have been based upon a CCRC report (“High Stakes Placement …”, 2012, by Scott-Clayton). https://ccrc.tc.columbia.edu/publications/high-stakes-placement-exams-predict.html  The conclusion about tests versus HS GPA is this:

First, focusing on the first or second columns, which examine the predictive value of placement scores alone for slightly different samples, one can see that exam scores are much better predictors of math outcomes than English outcomes. The overall proportion of variation explained is 13 percent for a continuous measure of math grades, compared with only 2 percent for a continuous measure of English grades. This is consistent with the findings from previous research.

The data being referenced is this:

[Table: proportion of variation in outcomes explained by placement scores — 13% for math grades, 2% for English grades]

Not only is the claim ‘HS GPA is better’ inaccurate for the original research (for mathematics) — people also never mention a fundamental issue:

 

The data for the 2012 study came from ONE “large urban community college system” (LUCCS)

Now, I don’t doubt the basic premise that including more variables can improve a decision (such as a test plus HS GPA).  The problem is that the message “HS GPA is better” has been repeated so often, by so many people, that decision makers accept it as truth.  The truthfulness depends a great deal on the decision being made — placement in English, or placement in Mathematics.  The situation looks pretty clear (in the LUCCS data) for English, where using the HS GPA alone seems a better thing.  In Mathematics … not so much!

Researchers have developed models for placement in mathematics based on HS transcript data, though I’ve never seen a proven model using just HS GPA.  The variables connected to these research models involve:

  • Specific mathematics courses completed in high school (especially grades 11 and 12)
  • Specific grades received in those mathematics courses
  • As a minor factor, the overall HS GPA

A good prototype of this scheme is the California “MMAP” work; see http://rpgroup.org/Portals/0/Documents/Projects/MultipleMeasures/DecisionRulesandAnalysisCode/Statewide-Decision-Rules-5_18_16_1.pdf  .  Rather than the ‘drive-off-the-cliff’ approach (North Carolina, Florida, etc), this is a scientific approach to a complicated problem.  Few of our colleges, and few states, are willing to invest the resources necessary for this truth-in-multiple measures approach.  [The fact that California can do this seems to have been a consequence of decisions about higher education in that state 50 and 60 years ago.  We probably won’t see that again.]

Some additional truth about HS GPA:

High School GPA transmits inequity

Here is some data from the US Department of Education transcript study (2009):

[Data: HS GPA gaps by race/ethnicity, US Department of Education transcript study (2009)]

The issue is not that HS GPA transmits inequity while placement tests do not.  In the case of SAT Math (and ACT Math) the gaps are known to exist.  The issue is that HS GPA transmits the inequity without regard to the student’s abilities in a subject domain.

The race/ethnicity ‘gaps’ for HS GPA are just one way to establish that it transmits inequity.  Economic and geographical inequities are also apparent in the HS GPA data.  At least the test developers strive to minimize their inequity; items which show a significant differential impact are removed from the tests.

Placement tests are less harmful to students than HS GPA.

The truth about multiple measures is that they will only help students when implemented in a scientific manner for the location or region involved.  HS GPA by itself will harm students.
