HS GPA and Math Placement

In the policy world, “multiple measures” is presented as the silver bullet for all issues of student placement in college.  Within that work, the HS GPA is held up as the most reliable measure for placing students.  This conclusion is the result of some good research being used for disruptive purposes: a core conclusion is generalized to mathematics when the data actually concerned language (‘english’) placement.

A central reference in the multiple measures genre is the Scott-Clayton report from the CCRC ( https://ccrc.tc.columbia.edu/publications/high-stakes-placement-exams-predict.html ).  One of the key findings in that report is that placement tests have more validity in math than in English.  Another finding is that placement accuracy could be improved by including the HS GPA … especially in English.  However, the narrative since then has repeated the unqualified claim that the HS GPA is a better predictor than placement tests.  Repetition of a false claim is a basic strategy in the world of propaganda.

In an earlier post, I shared a graphic on HS GPA vs ACT tests.

[Graphic from the ACT study: probability of passing college algebra (B or better) by HS GPA, for several ACT score ranges]
This data comes from a large ACT study.  If the HS GPA were a good predictor by itself, every ACT score range would show a high probability of passing a college algebra course (B or better).  The fact that the two lower ACT ranges show an almost-zero rate of change contradicts that expectation.

Locally, I have looked at HS GPA versus math testing using SAT Math scores:

[Graph: HS GPA versus SAT Math scores for our local students]
Although this graph does not look at ‘success’, we have plenty of other data to support the conclusion of Scott-Clayton — math placement tests have better validity than English tests.  [The horizontal reference lines in this graph represent the cutoffs for our math classes.]

One might make the argument that math tests work fine for algebra-based math courses, and that the HS GPA works better for general education math courses.  As it turns out, we have been using an HS GPA cutoff for our quantitative reasoning course (Math119) … which includes some algebra, but is predominantly numeracy.

Results:

  • Students who used their HS GPA to place:  44% pass rate
  • Students who placed via a math test:  77% pass rate

In fact, I am seeing indications in the data that the HS GPA should be used as a negating factor alongside placement tests … a score above a cutoff combined with a low HS GPA indicates a lack of ‘readiness’ to succeed.

In theory, a multiple-measures formula could include negative impacts (in this case, an HS GPA below 3.0).  In practice, this is not usually done.  [Another point:  multiple measures formulas are based on statistical analysis … and politics … which transforms a mathematical problem into a statistical one, resulting in a ‘formula score’ that has no direct meaning to us or our students.  An irony within this statistical work is that the HS GPA lacks the interval quality needed to justify computing a mean: the HS GPA itself is a bad measure, statistically.]
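To make the idea of a ‘negative impact’ concrete, here is a minimal sketch in Python.  Everything in it is invented for illustration (the 100-point test scale, the weight, and the 3.0 threshold); it is not any college’s actual formula.

```python
# Hypothetical sketch only: not any college's actual formula.
# A multiple-measures placement score in which a low HS GPA can
# *reduce* a test-based score, instead of only adding to it.

def placement_score(test_score: float, hs_gpa: float) -> float:
    """Combine a math test score (0-100 scale) with the HS GPA (0.0-4.0).

    The 3.0 threshold and the weight (10 points per GPA point
    below 3.0) are invented for illustration.
    """
    score = test_score
    if hs_gpa < 3.0:
        # Negative impact: the further below 3.0, the larger the deduction.
        score -= (3.0 - hs_gpa) * 10.0
    return score

# A test score of 75 with a 2.4 HS GPA nets 69: the low GPA
# 'negates' part of the test's evidence of readiness.
print(placement_score(75, 2.4))  # 69.0
```

Even this sketch illustrates the other problem noted above: the resulting ‘formula score’ of 69 has no direct meaning to a student or an advisor.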

Regardless of formulas for multiple measures, we have sufficient data to conclude that the HS GPA is well correlated with general college success, as well as with readiness in English, but that it makes little independent contribution to measuring math readiness.

Mathematics placement should be a function of inputs with established connections to mathematics.  The results should be easy for our students to interpret.  Any use of the HS GPA in mathematics placement violates principles of statistics and contradicts the research.

Include ‘statistics’ in a math course? Maybe not :(

The new prep curriculum in mathematics uses a course like Mathematical Literacy as the starting point; such a course is not just about algebra.  The ‘average’ math lit course includes a little bit of statistics, though it is not clear that this is helping students.

A recent test in my Math Lit course included this question:

According to a study of player salaries back in 1998, here are three bits of information:

Mean salary: $2.2 million     Median: $1.3 million     Mode: $272,000

One source made a claim that “most NBA players made more than $2 million” that year.
This claim is false.  Explain why; use a complete sentence.

This was question #2; question #1 was to find the 3 ‘averages’ for a given set of values.  As usual, the class did very well on those computational questions.  If somebody asks them for a median, there is a pretty good chance that the students will do fine.

However, when asked to use that same information to support an argument … well, let’s just say that my students did pretty lousy.  One student out of 2 classes (47 students) did a decent job — and all students would have encountered this type of question in their ‘homework’.

The word “most” was the key challenge: students automatically connected that word with the mode, which is not an answer for this question at all.  A good portion of the students actually presented an argument that the claim was true, based on the mean.  [The median settles it: if the median salary was $1.3 million, then at least half of the players earned $1.3 million or less, so ‘most’ players could not have earned more than $2 million.]
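For anyone who wants to see that arithmetic in action, here is a small sketch with invented salary data (not the actual 1998 figures), shaped so the summary statistics come out near the ones in the question:

```python
# Invented, right-skewed salary data in millions of dollars.
# NOT the actual 1998 NBA salaries; just shaped to give a mean
# near $2.2 million and a median near $1.3 million.
from statistics import mean, median

salaries = [0.272]*8 + [0.9]*10 + [1.3]*5 + [1.8]*6 + [3.5]*8 + [10.5]*3

print(f"mean   = ${mean(salaries):.2f}M")    # $2.20M
print(f"median = ${median(salaries):.2f}M")  # $1.30M

# The median alone settles the claim: at least half of the values
# sit at or below $1.3M, so 'most' cannot be above $2M, no matter
# how far a few huge salaries pull the mean up.
share = sum(s > 2.0 for s in salaries) / len(salaries)
print(f"share above $2M: {share:.1%}")       # 27.5%
```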

We are teaching more statistics (often integrated into other courses) because we are led to believe that everybody will be ‘using’ statistics.  I certainly agree that we are all subjected to good and bad uses of statistics in the everyday navigation of western society.  Does computational fluency with ‘averages’ contribute to statistical education?  No, not for the majority of our students.

I think we add statistics (and/or probability) to our classes because doing so allows us to believe that we are providing something of value to our students.  A little bit of statistics might very well be worse than no statistics at all.

 

Core Deceits for Destroying Remediation

Back in 2012, several groups … including the Dana Center and Complete College America … published a statement entitled Core Principles for Transforming Remedial Education, which has since been used as an a priori proof of specific solutions to perceived problems in the profession of preparing students for success in college-level courses, for completion, and for a better life.

The core principles stated have been treated as research-based conclusions with a sound theoretical underpinning.  The purpose of this post is to look at the truth value of each statement — thus the title about ‘deceits’.

Here we go …

Principle 1: Completion of a set of gateway courses for a program of study is a critical measure of success toward college completion.

This is clearly a definition being proposed for research.  Certainly, completing gateway courses is a good thing.  “Success”?  Nope; at best this completion would be a measure of association, and our students have complicated lives and very diverse needs.  For some of them, I would make the case that delaying their gateway courses is the best thing we can do, since completing those courses tends to lock students into a program.  Curiously, the rhetoric attached to this principle states that remedial education does not build ‘momentum’.  This is clearly a marketing phrase based on appealing to emotional states in the reader.  Anybody who has been immersed in remedial education has seen more momentum there than the authors of this statement have seen in gateway courses.

Principle 2: The content of required gateway courses should align with a student’s academic program of study — particularly in math.

“Alignment” is the silver bullet du jour.  Any academic problem is reduced to building proper ‘alignment’.  The word is ill-defined in general (unless we are speaking of automobiles), and is especially ill-defined in education.  The normal implementation is that the mathematics is limited to the applications a student will encounter in their program of study.  I’ve written about these issues (see At the Altar of Alignment and Alignment of Remediation with Student Programs).  In the context of this post, I’ll just add that the word alignment is like the word momentum — almost everybody likes the idea, though almost nobody can actually show what it is in a way that helps students.

The rationale for this deceit targets remedial mathematics as being the largest barrier to success.  If the phrase is directed at sequences of 3 or more remedial math courses, I totally agree — there is a significant research base for that conclusion.  There is no research base suggesting that 1 or 2 remedial math courses is the largest barrier to completion.

And …

Principle 3: Enrollment in a gateway college-level course should be the default placement for many more students.

This deceit is based on two cultural problems.  The first is an attack on using tests to place students in courses, in both ‘english’ and mathematics.  In ‘english’, there is a good reason to question the use of tests for placement:  cultural bias is almost impossible to avoid.  For mathematics, there is less evidence of a problem.  However, the deceit suggests that both types of testing are ‘bad’.  Another of these principles (#6, below) addresses placement testing directly.

The second cultural problem is one of privilege:  parents from well-off areas with good schools are upset that their “precious children” are made to take a remedial course.  These parents question our judgments about what is best for students, and some of them are engaged with the policy influencers (groups such as those who drafted the ‘core principles’ document being discussed).  Of course, I have no evidence for these statements … just as the authors of the deceit have no evidence that the default placement rule would produce better results.

There is an ugly truth behind this deceit:  Especially in mathematics, we have tended to create poorly designed sequences of remedial courses which appear (to students and outsiders) to serve the primary purpose of weeding out the ‘unworthy’.  We have had a very poor record of accepting diversity, and little tolerance of ‘not quite ready’.  Decades of functioning in this mode left us vulnerable to the disruptive influences evidenced by the core deceits.

Next:

Principle 4: Additional academic support should be integrated with gateway college-level course content — as a co-requisite, not a prerequisite.

I am impressed by the redundancy of ‘integrated’ and ‘co-requisite’.  This is a give-away that the authors are more concerned with rhetoric supporting their position than with actual students.  This call to use only co-requisite ‘remediation’ is also a call to kill off all stand-alone remediation.  I have written on this before (see Segregation in College Mathematics: Corequisites! Pathways? and Where is the Position Paper on Co-Reqs? Math in the First Year? for starters).

Within mathematics, we would call principle 3 a ‘conjecture’ and principle 4 its ‘corollary’.  This unnecessary repetition is another give-away that the argument to kill remedial courses is more important than improving education.  The groups behind the ‘core principles [sic: deceits]’ have been beating the drumhead with ‘evidence’ that co-requisite remediation works.  Underneath this core deceit is a very bad idea about mathematics:

The only justification for remediation is to prepare students for one college level math course (aligned with their program, of course 🙁 )

Remedial mathematics has three foundations: preparing students for their college math course, preparing students for other courses (science, technology, economics, etc.), and preparing students for success in general.  Perhaps we have nothing to show for our efforts on the last item, but there are clear connections between remedial mathematics and the other courses in a student’s program.  Co-requisite remediation is a closed-system solution to an open-system problem (see The Selfishness of the Corequisite Model).

Next:

Principle 5: Students who are significantly underprepared for college level academic work need accelerated routes into programs of study.

Conceptually, this principle is right on: there is no deceit in the basic idea.  The loophole is the one word ‘routes’.  The commentary in the principles document is appropriately vague about what this means, so I can give this one my seal of approval.

To continue …

Principle 6: Multiple measures should be used to provide guidance in the placement of students in gateway courses and programs of study.

This is the follow-up to the default placement deceit.  Some of the discussion is actually good (about providing students with more support before testing, including review resources).  The deceit in this principle comes in two forms: the direct attack on placement tests, and the unquestioning support of the HS GPA for college course placement.

The attack on placement tests has been vicious and prolonged.  People use words like ‘evil’; one of my administrators uses the word ‘nuances’ as code for ‘this is so evil I don’t have a polite word for it’.  This attack is a direct reason why we no longer have the “Compass” option.  The deceit itself is based on reasonably good research being generalized without rationale.  Specifically, the research consistently supports a better record for mathematics placement tests than for ‘english’ placement tests, yet the multiple measures propaganda includes mathematics.

The use of HS GPA in college course placement is a recent bad idea.  I’ve written about this in the past (see Does the HS GPA Mean Anything? and Placement Tests, HS GPA, and Multiple Measures … “Just the Facts” for starters).   Here is a recent scatterplot for data from my college:

[Scatterplot: HS GPA versus SAT Math scores at my college]
The horizontal lines represent our placement rules.  The correlation in this data is 0.527: statistically significant, but practically almost useless.  Our data suggests that the HS GPA adds very little value to a placement rule; at the micro level, I have used the HS GPA as part of ‘multiple measures’ in forming groups … and have found that students would have been better served if I had ignored it.
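As a quick check on the ‘practically useless’ claim, here is the arithmetic; it uses only the r = 0.527 figure quoted above, not any student records:

```python
# Quick arithmetic on the reported correlation.
r = 0.527
r_squared = r ** 2
print(f"r^2 = {r_squared:.3f}")             # 0.278

# Roughly 28% of the variation in SAT Math scores moves with the
# HS GPA; about 72% does not.  That is why the scatterplot looks
# like a cloud, and why the GPA adds so little to a placement rule.
print(f"unexplained: {1 - r_squared:.0%}")  # 72%
```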

The last:

Principle 7: Students should enter a meta-major when they enroll in college and start a program of study in their first year in order to maximize their prospects of earning a college credential.

Connected with this attack on remedial courses is a call for guided pathways, which is where the ‘meta-major’ comes from.  The narrative for this principle again uses the word ‘aligned’.  In many cases (like my college), the ‘first year’ requirement is implemented as ‘take your credit math course in the first year’.  Again, I have addressed these concepts before (see Where is the Position Paper on Co-Reqs? Math in the First Year? and Policy based on Correlation: Institutionalizing Inequity).

[Image: one college’s meta-major graphic]
Meta-majors are a reasonable concept to use with our student population.  However, the normal implementation almost amounts to students selecting a meta-major because they like the graphical image we use.  In other cases, like the image shown here, meta-majors are just another confusing construct we ask potential students to survive.

As is normal, we can find some truths and some helpful guidance … even within these 7 deceits about remediation.  Taken on its own merits, however, the document is flawed at basic levels, and would not survive (even in its final form) the normal review process for publication in most journals.

Progress is made based on deeper understanding of problems, building a conceptual and theoretical basis, and developing a community of practitioners.  The ‘7 deceits’ does little to contribute to that progress, and those deceits are normally used to destroy structures and courses.  Our students deserve better, and our institutions should be ashamed of using deceitful principles as the basis for any decision.

 

Nested and Sequential: Not in Math, or “What’s Wrong with ALEKS”?

Much of our mathematics curriculum is based on a belief in the ‘nested and sequential’ nature of our content: Topic G requires knowledge of Topics A to F; mastery of Topic G therefore implies mastery of Topics A to F.  A popular platform (ALEKS) takes this as a fundamental design factor; students take a linear series of steps through the curriculum, and can only see items for which the system judges they have shown readiness.
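To see that design factor in the abstract, here is a toy sketch of strict prerequisite gating.  This is a caricature of the ‘nested and sequential’ assumption only; it is not ALEKS’s actual algorithm (which is built on knowledge space theory), and the topic names and chain are invented:

```python
# Toy model of strict 'nested and sequential' gating: a caricature
# of the assumption being critiqued, NOT ALEKS's actual algorithm.
# The topic names and the linear prerequisite chain are invented.

PREREQS = {
    "integer_ops": [],
    "fractions":   ["integer_ops"],
    "linear_eqs":  ["fractions"],
    "polynomials": ["linear_eqs"],
    "factoring":   ["polynomials"],
    "functions":   ["factoring"],   # 'Topic G', gated behind A..F
}

def visible_topics(mastered: set[str]) -> list[str]:
    """Return the topics a student is allowed to see: only those
    whose prerequisites have ALL been judged mastered."""
    return [topic for topic, reqs in PREREQS.items()
            if topic not in mastered and all(r in mastered for r in reqs)]

# A student with a gap at 'factoring' is locked out of 'functions',
# even though working on functions might repair that very gap.
print(visible_topics({"integer_ops", "fractions",
                      "linear_eqs", "polynomials"}))
# ['factoring']
```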

Other disciplines do not maintain such a restricted vision of their content, whether we are talking about ‘natural’ sciences or social sciences … even foreign language curricula are not as “OCD” as math has been.  As a matter of human learning, I can make the case that learning topic G will help students master topics A to F; limiting their access to topic G will tend to cause them to struggle or not complete a math class.

Whether we are talking about a remedial topic (such as polynomial operations) or a pre-calculus topic (function analysis), the best case we can make is that the topics are connected — understanding each one relates to understanding the other.  Certainly, if a student totally lacks understanding of a more basic idea it makes sense to limit their access to the more advanced idea.  However, this is rarely the situation we face in practice:  It’s almost always a question of degree, not the total absence of knowledge.

At my institution, this actually relates to our general education approach (as it probably does at most institutions).  In our case, we established our requirements about 25 years ago; the mathematics standard (at that time) was essentially intermediate algebra.  The obvious question was “how about students who can place into pre-calculus or higher?”  One of my colleagues responded with “these courses are nested and sequential; passing pre-calculus directly implies mastery of intermediate algebra”.  My judgment is that this was incorrect, and still is.  Certainly, there is a connection between the two; we might even call it a direct correlation.  However, that correlation is far from perfect.

Learning is a process which involves forward movement as well as back-tracking.  We are constantly discovering something about an earlier topic that we did not really understand, and this is discovered when we attempt a connected topic dependent on that understanding.

Some of my colleagues are very concerned about equity, especially as it relates to race, ethnicity, and social status.  Using a controlled sequence model has the direct consequence of limiting access to more advanced topics and college-level courses for groups of concern … students in these groups have a pronounced tendency to arrive at college with ‘gaps’ in their knowledge.  A mastery approach, although a laudable goal, is not a supportive method for many students.

In some ways, co-requisite courses are designed based on this misconception: we ‘backwards design’ the content in the co-req class so that the specific prerequisite topics are covered and mastered.  I don’t expect that these courses actually have much impact on student learning in the college-level courses.

Back in the ‘old days’ (the 1970s) a big thing was programmed learning, and even machine learning.  The whole approach was based on a nested and sequential view of the content domain.  My department used some of those programmed learning materials, though not for long — the learning was not very good, and the student frustration was high.

Our courses, and our software (such as ALEKS), are too often based on a nested-and-sequential vision of content, as opposed to a learning-opportunity approach.  By using the phrase “knowledge spaces”, ALEKS attempts to sell us a set of products based on a faulty design.  Yes, I know … people “like ALEKS” and “it works”.  My questions are “do we like ALEKS because we don’t need to worry about basic decisions for learning?” and “do we think it works because students improve their procedural knowledge, or do they make any progress at all in their mathematical reasoning?”

Obviously, there is a basic fault with any suggestion to remove the progressive nature of our curriculum … there are some basic dependencies which cannot be ignored.  However, that is not the same as saying that students need mastery of every piece or segment of the curriculum.  No, the issue is:

Do students have SUFFICIENT understanding of prerequisite knowledge so that they can learn the ‘new’ stuff?

This ‘sufficient understanding’ is the core question in course placement, which I have addressed repeatedly in prior posts.  I am suggesting that the ambiguity of that process (we can never be certain) is also valid at the level of topics within a course.  It is easy to prove by counter-example that students do not need to have mastered all of the prior mathematics before succeeding; they do not even necessarily need the majority of that mathematics.  Learning mathematics is far messier, and much more exciting, than the simplistic ‘nested and sequential’ view suggests.

There is a substantial literature on ‘global learners’.  I definitely prefer the concept of ‘global learning’, as I think our own ‘styles’ vary with the context.  That literature might help you understand the ‘ambiguities’ I refer to; see https://www.vaniercollege.qc.ca/pdo/2013/11/teaching-tip-ways-of-knowing-sequential-vs-global-learners/ as a starting point.  As a side comment, ‘global learning’ is also used to describe the goal of having students gain a better understanding of the world’s societies, cultures, and countries; in that context, the intended word is ‘world’, not ‘global’ (global refers to a physical shape, while ‘world’ refers to inhabitants).

A nested and sequential structure, by design, limits opportunities to learn.  This, in turn, ensures that we will fail to serve students who did not have good learning opportunities in their K-12 education.  Just because we can lay out a logical structure for topics and courses from a nested & sequential point of view does NOT mean that this is a workable approach for our students.

Drop as many of the sequential limitations as you can, and start enjoying the excitement of more learning for our students.
