Category: politics of developmental mathematics

HS GPA and Math Placement

In the policy world, “multiple measures” is the silver bullet for solving all issues of student placement in college.  Within that work, the HS GPA is presented as the most reliable measure for placement.  This conclusion is the result of good research being put to disruptive use: a core finding is generalized to mathematics when the data pointed at language (‘english’) placement.

A central reference in the multiple measures genre is the Scott-Clayton report from the CCRC ( https://ccrc.tc.columbia.edu/publications/high-stakes-placement-exams-predict.html ).  One of the key findings in that report is that placement tests have more validity in math than in english.  Other results include the fact that placement accuracy could be improved by including the HS GPA … especially in English.  However, the narrative since that time has repeated the unqualified claim — that HS GPA is a better predictor than placement tests.  Repetition of a false claim is a basic strategy in the world of propaganda.

In an earlier post, I shared a graphic on HS GPA vs ACT tests.

[Graphic: HS GPA vs. ACT score ranges, showing the probability of a B or better in college algebra]

This data is from a large ACT study, which means that … if the HS GPA were a good predictor … we would see the probability of passing college algebra (B or better) rise with GPA within every ACT score range.  The fact that the two lower ACT ranges show an almost-zero rate of change contradicts that expectation.

Locally, I have looked at HS GPA versus math testing using SAT Math scores:

[Graph: HS GPA vs. SAT Math scores for local students; horizontal reference lines mark the cutoffs for our math classes]

Although this graph does not look at ‘success’, we have plenty of other data to support the conclusion of Scott-Clayton — math placement tests have better validity than English tests.  [The horizontal reference lines in this graph represent the cutoffs for our math classes.]

One might make the argument that math tests work fine for algebra-based math courses, and that the HS GPA works better for general education math courses.  As it turns out, we have been using an HS GPA cutoff for our quantitative reasoning course (Math119) … which includes some algebra, but is predominantly numeracy.

Results:

  • Students who used their HS GPA to place:  44% pass rate
  • Students who placed via a math test:  77% pass rate

In fact, I am seeing indications in the data that the HS GPA should be used as a negating factor on placement test results … a test score above the cutoff combined with a low HS GPA indicates a lack of ‘readiness’ to succeed.

In theory, a multiple-measures formula could include negative impacts (in this case, an HS GPA below 3.0).  In practice, this is not usually done.  [Another point:  multiple-measures formulas are built from statistical analysis … and politics … which turns a mathematical problem into a statistical one, producing a ‘formula score’ that has no direct meaning to us or our students.  An irony within this statistical work is that the HS GPA lacks the interval quality needed to justify taking a mean: the HS GPA is itself a bad measure, statistically.]
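
For illustration only, here is a minimal sketch of what a formula with a negating factor could look like.  Every weight, the 3.0 cutoff, and the function name placement_score are invented for this sketch; none of it is any college’s actual rule.

```python
# Hypothetical multiple-measures score: a placement test (0-100) combined with HS GPA,
# where a weak GPA subtracts from the score instead of adding to it.
# All weights and cutoffs are invented for this illustration.

def placement_score(test_score, hs_gpa):
    score = 0.8 * test_score              # the math test carries most of the weight
    if hs_gpa >= 3.0:
        score += 5 * (hs_gpa - 3.0)       # small bonus for a strong HS GPA
    else:
        score -= 10 * (3.0 - hs_gpa)      # negating factor: a low HS GPA pulls the score down
    return score

print(placement_score(test_score=70, hs_gpa=3.6))   # 59.0
print(placement_score(test_score=70, hs_gpa=2.3))   # 49.0 -- same test score, much lower placement
```

Note that even this toy ‘formula score’ has no direct meaning to a student, which is exactly the objection in the bracketed comment above.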

Regardless of formulas for multiple measures, we have sufficient data to conclude that the HS GPA is well correlated with general college success, and with readiness in English, but that it contributes little independent information about math readiness.

Mathematics placement should be a function of inputs with established connections to mathematics.  The results should be easy to interpret for our students.  Any use of the HS GPA in mathematics placement violates principles of statistics and also contradicts research.

 

 

Core Deceits for Destroying Remediation

Back in 2012, several groups … including the Dana Center and Complete College America … published a statement entitled Core Principles for Transforming Remedial Education, a statement which has been used as an a priori proof of specific solutions to perceived problems in the profession of preparing students for success in college-level courses, for completion, and for a better life.

The core principles stated have been treated as research-based conclusions with a sound theoretical underpinning.  The purpose of this post is to look at the truth value of each statement — thus the title about ‘deceits’.

Here we go …

Principle 1: Completion of a set of gateway courses for a program of study is a critical measure of success toward college completion.

This is clearly a definition being proposed for research.  Certainly, completing gateway courses is a good thing.  “Success”?  Nope, at best this completion would be a measure of association; our students have complicated lives and very diverse needs.  For some of them, I would make the case that delaying their gateway courses is the best thing we can do; this step tends to lock them in to a program.  Curiously, the rhetoric attached to this principle states that remedial education does not build ‘momentum’.  This is clearly a marketing phrase based on appealing to emotional states in the reader.  Anybody who has been immersed in remedial education has seen more momentum than the authors of this statement have seen in gateway courses.

Principle 2: The content of required gateway courses should align with a student’s academic program of study — particularly in math.

“Alignment” is the silver bullet du jour.  Any academic problem is reduced to building proper ‘alignment’.  The word is ill-defined in general (unless we are speaking of automobiles), and is especially ill-defined in education.  The normal implementation is that the mathematics is limited to the applications a student will encounter in their program of study.  I’ve written about these issues (see At the Altar of Alignment and Alignment of Remediation with Student Programs).  In the context of this post, I’ll just add that the word alignment is like the word momentum — almost everybody likes the idea, though almost nobody can actually show what it is in a way that helps students.

The rationale for this deceit targets remedial mathematics as being the largest barrier to success.  If the phrase is directed at sequences of 3 or more remedial math courses, I totally agree — there is a significant research base for that conclusion.  There is no research base suggesting that 1 or 2 remedial math courses is the largest barrier to completion.

And …

Principle 3: Enrollment in a gateway college-level course should be the default placement for many more students.

This deceit is based on two cultural problems.  The first is an attack on using tests to place students in courses, both ‘english’ and mathematics.  In ‘english’, there is good reason to question the use of tests for placement:  cultural bias is almost impossible to avoid.  For mathematics, there is less evidence of a problem.  However, the deceit suggests that both types of testing are ‘bad’.  Another of the principle deceits addresses placement testing directly.

The second cultural problem is one of privilege:  parents from well-off areas with good schools are upset that their “precious children” are made to take a remedial course.  These parents question our opinions about what is best for students, and some of them are engaged with the policy influencers (groups such as those who drafted the ‘core principles’ document being discussed).  Of course, I have no evidence for these statements … just as the authors of the deceit have no evidence that the default placement rule would make things better.

There is an ugly truth behind this deceit:  Especially in mathematics, we have tended to create poorly designed sequences of remedial courses which appear (to students and outsiders) to serve the primary purpose of weeding out the ‘unworthy’.  We have had a very poor record of accepting diversity, and little tolerance of ‘not quite ready’.  Decades of functioning in this mode left us vulnerable to the disruptive influences evidenced by the core deceits.

Next:

Principle 4: Additional academic support should be integrated with gateway college-level course content — as a co-requisite, not a prerequisite.

I am impressed by the redundancy of ‘integrated’ and ‘co-requisite’.  This is a giveaway that the authors are more concerned with rhetoric supporting their position than they are with actual students.  This call to use only co-requisite ‘remediation’ is also a call to kill off all stand-alone remediation.  I’ve also written on this before (see Segregation in College Mathematics: Corequisites! Pathways? and Where is the Position Paper on Co-Reqs? Math in the First Year? for starters).

Within mathematics, we would call principle 3 a ‘conjecture’ and principle 4 a ‘corollary’.  This unnecessary repetition is a giveaway that the argument to kill remedial courses is more important than improving education.  The groups behind the ‘core principles [sic: deceits]’ have been beating the drum with ‘evidence’ that it works.  Underneath this core deceit is a very bad idea about mathematics:

The only justification for remediation is to prepare students for one college level math course (aligned with their program, of course 🙁 )

Remedial mathematics has three foundations: preparing students for their college math course, preparing students for other courses (science, technology, economics, etc.), and preparing students for success in general.  Perhaps we have nothing to show for our efforts on the last item, but there are clear connections between remedial mathematics and the other courses in a student’s program.  Co-requisite remediation is a closed-system solution to an open-system problem (see The Selfishness of the Corequisite Model).

Next:

Principle 5: Students who are significantly underprepared for college level academic work need accelerated routes into programs of study.

Conceptually, this principle is right on; there is no deceit in the basic idea.  The loophole is the one word ‘routes’.  The commentary in the principles document is appropriately vague about what it means, and I can give this one my seal of approval.

To continue …

Principle 6: Multiple measures should be used to provide guidance in the placement of students in gateway courses and programs of study.

This is the principle that follows up on the default-placement deceit.  Some of the discussion is actually good (about providing more support to students before testing, including review resources).  The deceit in this principle comes in two forms: the direct attack on placement tests, and the unquestioning support of the HS GPA for college course placement.

The attack on placement tests has been vicious and prolonged.  People use words like ‘evil’; one of my administrators uses the word ‘nuances’ as code for ‘this is so evil I don’t have a polite word for it’.  This attack on placement tests is a direct reason why we no longer have the “Compass” option.  The deceit itself is based on reasonably good research being generalized without rationale.  Specifically, the research consistently supports a better record for mathematics placement tests than for ‘english’ tests, yet the multiple measures propaganda sweeps in mathematics anyway.

The use of HS GPA in college course placement is a recent bad idea.  I’ve written about this in the past (see Does the HS GPA Mean Anything? and Placement Tests, HS GPA, and Multiple Measures … “Just the Facts” for starters).   Here is a recent scatterplot for data from my college:

[Scatterplot: HS GPA vs. our math placement measure; horizontal lines mark the placement cutoffs]

The horizontal lines represent our placement rules.  The correlation in this data is 0.527; statistically significant, but practically almost useless.  Our data suggests that using the HS GPA adds very little value to a placement rule; at the micro level, I use the HS GPA as part of ‘multiple measures’ in forming groups … and have found that students would have been better served if I had ignored the HS GPA.
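
To make “practically almost useless” concrete, here is a quick back-of-the-envelope calculation; it uses nothing from the dataset except the r = 0.527 value.

```python
# What does a correlation of r = 0.527 buy you as a predictor?
r = 0.527
r_squared = r ** 2                       # share of variance in the outcome "explained" by HS GPA
error_ratio = (1 - r_squared) ** 0.5     # prediction error relative to simply guessing the mean

print(round(r_squared, 2))    # 0.28 -- HS GPA accounts for roughly 28% of the variance
print(round(error_ratio, 2))  # 0.85 -- prediction error shrinks by only about 15%
```

In other words, knowing the HS GPA shrinks the uncertainty about where a student falls by only about 15%.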

The last:

Principle 7: Students should enter a meta-major when they enroll in college and start a program of study in their first year in order to maximize their prospects of earning a college credential.

Connected with this attack on remedial courses is a call for guided pathways, which is where the ‘meta-major’ comes from.  The narrative for this principle again uses the word ‘aligned’.  In many cases (like my college), the ‘first year’ is implemented as a ‘take your credit math course in the first year’ rule.  Again, I have addressed these concepts (see Where is the Position Paper on Co-Reqs? Math in the First Year? and Policy based on Correlation: Institutionalizing Inequity).

[Image: an example of a meta-major graphic presented to students]

Meta majors are a reasonable concept to use with our student population.  However, the normal implementation almost amounts to students selecting a meta-major because they like the graphical image we use.  In other cases, like the image shown here, meta-majors are just another confusing construct we try to get potential students to survive.

As is normal, we can find both some good truths and some helpful guidance … even within these 7 deceits about remediation.  Taken on its own merits, the document is flawed at basic levels, and would not survive (even in its final form) the normal review process for publication in most journals.

Progress is made based on deeper understanding of problems, building a conceptual and theoretical basis, and developing a community of practitioners.  The ‘7 deceits’ does little to contribute to that progress, and those deceits are normally used to destroy structures and courses.  Our students deserve better, and our institutions should be ashamed of using deceitful principles as the basis for any decision.

 

Math Education: Changes, Progress and the Club

What works?  How do we help more of our students … all students … achieve goals supported by understanding mathematics in college?  To answer those questions, we need to correct our confusion between measurements and objects; we need to develop cohesive theories based on scientific knowledge.  We are far from that stage, and policy makers tend to be just as ill-equipped.

Co-requisite remediation in mathematics has been broadly implemented.  The process which led to widespread use of a non-solution illustrates some of my points.

[Image: a traditional course-sequence diagram with calculus as the default goal]

The rush towards co-requisite models is based on two patterns in ‘data’:

  • Students who place into remedial mathematics courses tend to have significantly lower completion rates in programs
  • Students who pass a college math course in their first year tend to have significantly higher completion rates in programs

These patterns in the data are not in dispute.  I call co-requisite remediation a non-solution, however, because of the faulty reasoning that builds a solution on these ‘facts’.  I find it surprising that our administrators accept a treatment which never addresses the needs of the students; it is all about responding to the data.  That approach, in fact, is a political method for resolving a disagreement in which the people on one side hold the power to make decisions.

Eliminating stand-alone developmental math courses will not enable more students to complete their college program, beyond the 15% to 25% who were underplaced by testing, and then only to the extent that this group failed to complete their remedial math courses.  The data on this question usually shows that the issue is not failing remedial math courses; it is students never attempting the next course, or not being retained at all.
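
A rough bound makes the point.  The 20% underplacement figure below is the midpoint of the 15% to 25% range above; the 30% “blocked by remediation” figure is an assumption chosen only to illustrate the arithmetic.

```python
# Upper bound on how many students could be helped by eliminating stand-alone courses.
# Only students who were underplaced AND then blocked by the remedial course can benefit.
underplaced      = 0.20   # midpoint of the 15%-25% underplacement range
blocked_by_remed = 0.30   # assumed share of the underplaced who never get through remediation

max_helped = underplaced * blocked_by_remed
print(f"{max_helped:.0%} of the remedial cohort, at most")   # 6% of the remedial cohort, at most
```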

Tennessee has been the most publicized ‘innovator’, and I have been beaten with the “club” that the leader of that effort was a mathematician, the handle of the club reading “He’s a mathematician, and he thinks this is a good solution!”  As if mathematicians are never wrong :(.  Some evidence on related work in Tennessee is showing a lack of impact (https://www.insidehighered.com/news/2018/10/29/few-achievement-gains-tennessee-remedial-education-initiative).  I would expect that evidence to show that the main benefit of corequisite remediation is an increase in the proportion of students getting credit for a college math course in statistics or in quantitative reasoning (QR)/liberal arts math (LAM).

When the data shows that many students can pass a college math course in statistics, QR, or LAM, I don’t see any evidence of the co-requisite model working.  Many of my trusted colleagues believe that stat, QR, and LAM require very minimal mathematics; some believe that the only real requirement is a pulse, as long as there is effort.

Why do students placed in remedial mathematics have lower completion rates?  This group of students is not homogeneous.  Some of them have reasonably solid mathematical knowledge that is ‘rusty’ … a short treatment, or just an opportunity to review, is sufficient.  A significant portion have a combination of ‘rust’ and ‘absence’; the absence usually involves core ideas of fractions, percents, and basic algebraic reasoning.  Corequisite remediation presumes to address these needs via a ‘just in time’ approach; ‘rust’ responds well to short-term treatments, absence not so much.  Another portion of remedial math students have mostly ‘absence’ of mathematical knowledge.

I’ve written previously on the question of ‘when will co-requisites work’ (Co-requisite Remediation: When it’s likely to work).  My comments here are more related to the meaning of ‘work’ — what is the problem we are addressing, and what measures show the degree to which we are solving it?

One of the primary reasons we have today’s mess in college mathematics is … ourselves.  For decades, we presented an image of mathematics where calculus was the default goal as shown in the image at the start of this post.  This image is so flawed that external attacks had an easy time imposing a political solution to an educational problem:  avoidance.  Of course, courses not preparing students for calculus are generally ‘easier’ to complete.  That is not the problem.  The problem is our lack of understanding related to the mathematical needs of students beyond what we find in guided pathways.

To clarify, the ‘our’ in that last sentence is a reference to everybody engaged with both policy and curriculum.  As an example of the lack of understanding, consider the question of the mathematics needed for biology in general and future medical doctors in particular.  We hear comments like “doctors don’t use calculus on the job”; I’m pretty confident that my doctor is not finding derivatives nor integrating a rational function when she determines appropriate treatment options.  However, mastering modern biology depends on a conceptual understanding of rates of change along with decent modeling skills.  The problem is not that doctors don’t use calculus on the job; the problem is that our mathematics courses do not address their mathematical needs.

So, back to the two data points mentioned earlier.  The second of these dealt with the observation that students who complete a college mathematics course in their first year are more likely to complete their program.  This type of data is often used as a second rationale in the political effort to impose co-requisite remediation … get more students into that college math course right away.  Of course, the reasoning confuses correlation with causation; do groups of similar students have better completion with a college math course right away compared to a delay?  Historically, students who are able to complete a college math course in the first year have one or more of these advantages:

  • Higher quality opportunities in their K-12 experience
  • Better family support
  • Lack of social risk factors
  • Presence of enabling economic factors

If these factors are involved, then we would expect a ‘math in first year’ effort to primarily mean that more students complete a college math course — not that more students complete their program.  This is, in fact, what the latest Tennessee data shows.  I have not been able to find much research on the question of whether ‘first year math’ produces the results I predict versus what the policy marketing teams suggest.  If you know of any research on this, please pass it along.
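
To see how the background factors above can manufacture the observed correlation on their own, here is a toy simulation; every rate in it is invented, and first-year math completion has zero causal effect on program completion in this model, yet the familiar gap appears.

```python
# Toy simulation: an unobserved "advantage" (K-12 quality, family support, economics)
# drives both first-year math completion and program completion.
# First-year math has ZERO causal effect on completion here; the gap is pure confounding.
import random

random.seed(1)
results = []
for _ in range(100_000):
    advantage = random.random() < 0.4                              # assumed share of advantaged students
    math_first_year = random.random() < (0.7 if advantage else 0.3)
    completes = random.random() < (0.6 if advantage else 0.25)     # does NOT depend on math_first_year
    results.append((math_first_year, completes))

def completion_rate(rows):
    return sum(done for _, done in rows) / len(rows)

with_math = [r for r in results if r[0]]
without_math = [r for r in results if not r[0]]
print(f"completed program, math in first year:    {completion_rate(with_math):.0%}")    # about 46%
print(f"completed program, no math in first year: {completion_rate(without_math):.0%}") # about 33%
```

The gap between the two printed rates comes entirely from the advantage variable, which is the correlation-versus-causation point being made here.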

We need to own our problems.  These problems include an antiquated curriculum in college mathematics combined with instructional practices which tend to not serve the majority of students.  Clubs with data engraved on the handle are poor choices for the forces needed to make changes; understanding based on long-term scientific theories provides a sustainable basis for progress which will serve all of our students.

 

Increasing Success in Developmental Mathematics: Do we Have Any Idea?

Back in 2007, I had a conversation with two other leaders in AMATYC (Rikki Blair and Rob Kimball) concerning what we should do to make major and quantum improvements in developmental mathematics.  We shared a belief that the existing developmental mathematics courses were a disservice to students, and would remain a disservice even if we achieved 100% pass rates in those courses.  That conversation … and a whole lot of hard work … eventually led to a convening of content experts in 2009 (Seattle), the launching of the AMATYC New Life Project, and the beginnings of Carnegie Statway™ & Quantway™ as well as the Dana Center Mathways.

Imagine my surprise, then, when a publication was released dealing with “Increasing Success in Developmental Mathematics” by the National Academies of Science (see https://www.nap.edu/catalog/25547/increasing-student-success-in-developmental-mathematics-proceedings-of-a-workshop) based on a workshop this past winter.  It is true that the planning team for this workshop did not invite me; they know that I am close to retirement, and travel has become less comfortable, so my absence is actually fine with me.  No, the surprises deal with the premise and outcomes of the workshop.

One premise of the workshop included this phrase:

” … particular attention to students who are unsuccessful in developmental mathematics …”

This phrase reflects a very biased point of view — that there is something about students which causes them to be unsuccessful in some developmental mathematics (course? courses?).  This is the biggest surprise in the report, and not a good surprise: we continue to believe and act as if there are properties of students which mean that we should change our treatment.

If you read that last sentence carefully, you may be surprised yourself at the fact that I am surprised.  After all, it’s very logical that our instructional and/or curricular treatment should vary based on some collection of student characteristics.  Yes, it is logical.  Unfortunately, the statement is anything but reasonable.  Matching instructional methods, or curricular treatments, is an age-old problem of education.  Research on the efficacy of trait-treatment matching has a long and somewhat depressing history, going back at least to the early 1970s.  (http://www.ets.org/research/policy_research_reports/publications/report/1972/hrav)

Trait-Treatment interactions can be managed well enough at micro-levels.  My classes, like most of ours, adjust for student differences.  The problems arise when we seek to match at higher levels of abstraction — whether it’s “learning styles” or “meta majors”.  What we know about traits quickly becomes overwhelmed by the unknown and unknowable.  A similar flaw becomes apparent when we try to impose a closed system (ALEKS for example) on an organic component of a process (a student).  Sure, if I had a series of one-on-one interviews (perhaps 10 hours worth), I could match a student with a treatment that is very likely to work for them — given what was known at the time.  That knowledge might become horribly incomplete within days, depending on the nature of changes the student is experiencing.

[Image caption] This is an optimistic ‘theoretical’ error rate:  are you willing to be one of the “30%”?

The report itself spends quite a bit of time describing and documenting ‘promising’ practices.  The most common of these seems to involve one of various forms of tracking: a student is in major A, or in meta-major α, so the math course is matched to the ‘needs’ of that major or meta-major.  I suspect that several people attending the workshop would disagree with my use of the word “tracking”; they might prefer the phrase “aligned with the major”.  I am surprised at the ease with which we allow this alignment/tracking to determine what is done in mathematics classes.  Actually, I should use the word “discouraged”, because in most cases the mathematics chosen for a major deals with the specific quantitative work a student needs, often focusing on a minimal standard.  Are we really this willing to surrender our discipline?  Does the phrase “mathematically educated” now mean “avoids failing a low standard”?

To the extent the report deals with specific ways to ‘increase’ success in mathematics, I would rate the report as a failure.  However, when the report deals with concepts underlying our future work, you will find valid statements.  One of my favorites is from Linda Braddy:

Braddy asserted that administrators and educators are guilty of “educational malpractice” if they do not stop offering outdated, ineffective systems of mathematics instruction.  [pg 56]

I suspect the authors would not agree with me on what we should stop doing.  I also don’t know how to interpret the comma in “outdated, ineffective” — does something need to meet both conditions in order to be malpractice?  Should we insert an “or” in the statement where a comma shows?  How about if we drop the word “instruction”?  Seems like we should also address the outdated mathematics in our courses.

Although the workshop never claimed to be inclusive, I am also disappointed that the AMATYC New Life Project never gets mentioned.  Not only did our Project produce as many implementations as (or more than) the specifics described in the report, the genetic material from the Project was used to begin two of the efforts which are mentioned.  The result is a report which supports the notion that AMATYC has done nothing to advance ‘success in developmental mathematics’ in the past 12 years.
