Data – Statistics – The CCA Mythologies (or Lies?)

Decision makers (and policy makers) are being heavily influenced by some groups with an agenda.  One of those groups (Complete College America, or CCA) states goals which most of us would support … higher rates of completion, a better educated workforce, etc.

A recent blog post by the CCA was entitled “The results are in. Corequisite remediation works.” [See http://completecollege.org/the-results-are-in-corequisite-remediation-works/]  CCA sponsored a convening in Minneapolis on corequisite remediation, where presenters shared data on their results.  Curiously, all of the presenters cited in the blog post had positive ‘results’ for their co-requisite model; clearly, this means that the idea works!  [Or, perhaps not.]
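
To see why a parade of positive presentations proves very little, consider a toy simulation (a sketch with invented numbers, not anything from the CCA presentations): suppose a reform has no true effect, each pilot site’s measured “gain” in pass rate is pure noise, and only sites with good news are invited to present.

```python
import random

# A sketch of selection bias; every number here is invented.
random.seed(2)

TRUE_EFFECT = 0.0  # the reform does nothing, on average
# Each of 50 hypothetical pilot sites measures a noisy "gain"
# in pass rate (percentage points).
site_gains = [TRUE_EFFECT + random.gauss(0, 5) for _ in range(50)]

# Only sites with positive results get a slot at the convening.
presented = [g for g in site_gains if g > 0]

print(f"{len(presented)} of {len(site_gains)} sites present gains, "
      f"averaging {sum(presented) / len(presented):+.1f} points")
print(f"average across ALL sites: {sum(site_gains) / len(site_gains):+.1f} points")
```

Roughly half the sites report a “gain”, and the convening hears only success stories, even though the average effect across all sites is zero.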

The textbook we use for our Quantitative Reasoning course includes a description of ‘results’ that were very (VERY) wrong.  A survey was done using telephone lists and country club membership lists, back in the 1930s when few people had telephones.  The survey showed a clear majority favoring one candidate, based on over 2 million responses.  When the actual election was held, the other candidate won, by a larger margin than the survey had predicted for its favorite.
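
A rough simulation of that failure (with invented numbers, not the actual 1936 data) shows how a biased sampling frame can make even an enormous sample confidently wrong:

```python
import random

# A sketch of the 1930s telephone-poll failure; the rates are invented.
random.seed(1)

def make_voter():
    owns_phone = random.random() < 0.25        # only a wealthy minority has phones
    p_favors_b = 0.65 if owns_phone else 0.35  # ownership correlates with preference
    return random.random() < p_favors_b, owns_phone  # (favors B?, owns phone?)

electorate = [make_voter() for _ in range(1_000_000)]

# The "survey": a huge response count, but drawn only from phone lists.
survey = [b for b, owns_phone in electorate if owns_phone]
print(f"survey (n={len(survey):,}): candidate B at {100 * sum(survey) / len(survey):.1f}%")

# The election: everyone votes, and candidate A wins comfortably.
votes = [b for b, _ in electorate]
print(f"election (n={len(votes):,}): candidate B at {100 * sum(votes) / len(votes):.1f}%")
```

A quarter of a million responses, and the prediction is still off by more than twenty points, because the sampling frame, not the sample size, was the problem.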

The lesson from that story is “we need a representative sample” before using the data in any way.  The CCA results fail this standard badly; their ‘results’ are anecdotes.  These anecdotes do, in fact, suggest a connection between co-requisite approaches and better student outcomes.  College leaders, including my own provost, report that they “have seen the numbers” showing that this method is promising.

Based on that exposure to questionable data, we are running an experiment this summer … students assessed at the basic arithmetic level are taking an intermediate algebra course, with extended class time and a supplemental instruction leader.  Currently, we still allow students to use an intermediate algebra course to meet a general education requirement, so this experiment has the potential to save the 11 students enrolled two (or more) semesters of mathematics.  I have not heard how this experiment is going.

Curiously, data of this same quality on other ‘results’ would not be used to justify a change by decision makers and policy experts.  People have used data to support some extreme and wrong notions in the past, and are still doing so today.  The difference with the CCA is that they treat certain methodologies as automatically valid, with the co-requisite remediation models at the top of that list.

Scientists and statisticians would never reach any conclusions based on one set of data.  We develop hypotheses, we build on prior research, we refine theories … all with the goal of better understanding the mechanisms involved in a situation.  Co-requisite remediation might very well work great — but not for all students, because very little ‘works’ for all students.  The question is which students, which situations, and what conditions will result in the benefits we seek.
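
A small sketch (again with invented pass rates, not anyone’s actual data) shows how an overall ‘gain’ can hide the students a model does not help, especially when a pilot enrolls mostly the students it does help:

```python
import random

# Hypothetical pass probabilities by placement level and treatment arm;
# none of these rates come from real studies.
random.seed(3)

P_PASS = {
    "near college level": {"control": 0.55, "coreq": 0.72},  # real benefit
    "arithmetic level":   {"control": 0.30, "coreq": 0.29},  # essentially none
}
# A pilot that mostly enrolls the students the model helps.
ENROLLMENT = {"near college level": 450, "arithmetic level": 50}

for arm in ("control", "coreq"):
    passed_total = n_total = 0
    for level, n in ENROLLMENT.items():
        passed = sum(random.random() < P_PASS[level][arm] for _ in range(n))
        print(f"{arm:>7} | {level:<18} | pass rate {passed / n:.0%} (n={n})")
        passed_total += passed
        n_total += n
    print(f"{arm:>7} | overall            | pass rate {passed_total / n_total:.0%}")
```

The headline comparison looks like a double-digit improvement, while the arithmetic-level students gained essentially nothing; the honest question is exactly the one above: which students, under what conditions.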

The CCA claim that “corequisite remediation works” is a lie, communicated with a handful of numbers which support the claim; this lie is then repeated & repeated again as a propaganda method to advance an agenda.  We need to be aware of this set of lies, and do our best to help decision makers and policy experts understand a scientific approach to solving problems.

Instead of using absolute statements, which will not be true but are convenient, we need to remain true to the calling of helping ALL students succeed.  Some students’ needs can be met by concurrent remediation; we need to understand which students and find tools to match the treatment to them.  Our professionalism demands that we resist absolutes, point out lies, and engage in the challenging process of using statistics to improve what we do.

Join Dev Math Revival on Facebook.

6 Comments

  • By schremmer, June 25, 2015 @ 11:35 am

    I have no idea what “corequisite remediation” is really about. I looked up the site and all they are saying is “it” works.

    1. If it is used to help students go through a course which is already a cookbook, it cannot possibly work because it amounts to twice as big a cookbook. In other words, 0+0=0

    2. On the other hand, once upon a time, I developed a calculus course a bit along the lines of Osserman’s Two Dimensional Calculus. The idea was that one dimension collapsed too many things while three dimensions were too many to be handled by beginners. Of course, it seemed to work, but the physics department, which wanted to have its own calculus sequence, used the occasion. The reason I am mentioning it is that, because we were focusing on understanding and visualizing concepts, we did not have time for the nitty gritty of calculations. So we developed a Self Help In Techniques to which we could constantly refer. The students were interested in _using_ these techniques, which they had forgotten, in a context that made sense to them, but had no interest in investing any time in understanding the techniques themselves.

    The point was that the conceptual sum total of the course was far from being 0.

    As for the statistical validation of an approach, it should come after a case has been made that the approach has a fair chance to work.

  • By Jack Rotman, June 26, 2015 @ 9:00 am

    I’m making a post on ‘what corequisite remediation is’ … and when it is likely to work.

  • By Laura, June 26, 2015 @ 1:57 pm

    Jack,

    This is a powerful piece of writing, and you are correct about the power of lies like these to drive policy. I work in a CCA state, and we have been fighting very hard to use models that work for our students, many of whom are from small, rural, underfunded high schools. We think we have some pathways that might make a difference: a boot camp, an alternative to elementary algebra that leads directly into math as a liberal art, and a change in our prealgebra class. But it will take a long time, if ever, to have data and statistics that “prove” anything. The numbers are too small, and the variables keep changing and are very difficult to control. This is especially true because our faculty are not all on board with the changes and have the freedom to teach as they see fit. That is to say: statistics based on a non-representative sample taught by “true believers” are not predictive of what will happen in a wide-scale adoption. These predictions, assertions, and conclusions are, as you say, lies.

  • By schremmer, June 26, 2015 @ 10:50 pm

    @Laura

    Your post is a rare pleasure.

    Yet, we all have a certain feeling when things “work”, or at least seem to, and the question then is what to do at that point. As you did not quite say, statistics are essentially out of the question. Now what?

    How about the old way of settling mathematical disagreement, namely submitting one’s case to peers?

    After all, at least the contents and the framework in which the contents are dealt with ought to be arguable. Similarly, there are any number of linguistic issues.

    The trouble, of course, is that the CC culture has come resolutely to avoid any “controversy”. Even the idea of a discussion in which one would at least attempt to prove or disprove a case is taken to be uncouth.

  • By Jack Rotman, June 27, 2015 @ 3:51 pm

    I, too, really appreciated Laura’s comments.

    I would attribute the ‘trouble’ a bit differently … a combination of a “lack of patience” among leaders and considerable confusion about “who is in charge” of academic matters. A suggestion that extended discussion is needed usually is impolitic, though I am walking that knife edge right now (within my college).

  • By schremmer, June 28, 2015 @ 4:09 pm

    Re. “A suggestion that extended discussion is needed usually is impolitic,”

    Indeed.

    What is disheartening, though, is that this is the case even within one’s department, among one’s colleagues.
