Guided Pathways for Success: A Mathematician’s View (Part I)

The largest wave of external influence hitting colleges (especially community colleges) at this time is “Guided Pathways for Success” (GPS).  GPS is a package of talking points aimed at supporting degree completion at a higher (much higher) rate.

The basic components of GPS are described at http://completecollege.org/the-game-changers/.

GPS is one of the ‘game changers’ being advocated by Complete College America (co-requisite remediation is another).  One problem with implementing GPS is that the work is very complicated, which usually leaves most of the people working on the program without sufficient information.  We’re starting ours this year at my college, and only a handful of people have a complete view of our work … the rest of us know about only parts of one of the six efforts.  It’s also true that colleges doing GPS often attempt to take on another ‘game changer’ at the same time.

One specific issue where people often lack knowledge is the student’s initial choice … which program?  A meta-major?  Non-degree (and non-certificate)?  Students receiving financial aid through any federal program can choose a specific program leading to either transfer or an employable occupation.  The idea of the meta-majors is that each would be a shared starting point for a cluster of eligible programs, designed to provide occupational information and specific program selection in the first year.

As a mathematician, I see several advantages to GPS … and some areas of concern.  This initial post will summarize some advantages and explore an area of concern.

So, here are some things I like (from a math point of view):

  1. A strong emphasis on setting a goal (not much is worse than having students in class who have no idea what their goal is).
  2. An established sequence of courses for the program.  [My college, like many community colleges, has drifted far away from structured course sequences.]
  3. A message that picking a major is a serious step, best done without a dart board but with sufficient information.
  4. Putting an academic purpose (completion) in front of advising.

Clearly, one area of concern related to GPS is that other efforts (co-requisite remediation, for example) are often put into a ‘bundle’ of efforts for a college.  That is not a GPS issue, however; my first concern with GPS relative to mathematics centers on the ‘milestone’ course idea.

Historically, mathematics has been used (and abused) as the ultimate gatekeeper.  Students are required to take certain mathematics courses to prove that they are ready for the program.  Yes, mathematics is important for many careers; however, a gatekeeper context creates negative expectations for students.

If a program or meta-major requires mathematics (which they mostly will), what course will be most commonly selected for a milestone course in the first year?  Mathematics has already been mentioned for this role on my campus.  If the program is STEM or STEM-related, this is a great idea; students in these programs will have a sequence of mathematics to complete … and will also be using mathematics in other classes during most semesters.

Outside of those programs, I do not want students to (generally) take mathematics in the first year.  Many of these students currently wait until their very last semester to take mathematics, and this is a bad thing … but not as bad as being told that you must take that math course in the first (or second) semester.  I am concerned about student attitudes towards learning, combined with the challenges of starting college, within the mathematics classroom.

So, when a student looks at their program choices from among the non-STEM options, they might see “Math 125” (or whatever) on the list of expected first-semester courses.  The meta-major option related to their program might not have a math course (because math is ‘aligned to majors’).  The likely result?  Students pick a meta-major in order to delay taking mathematics … or we see reluctant (or resistant) students in math classes.

At least when students put off their math class until ‘late’, they come motivated to pass … perhaps not understanding what this means, but motivated.  First year CC students are likely to be reluctant and not especially motivated to pass mathematics.

Admittedly, this first concern is not the best choice to begin the conversation, since it deals with the factors influencing student choice along with motivation.  I’ll try to do better next time!

Join Dev Math Revival on Facebook:

Co-requisite Remediation: When It’s Likely to Work

A recent post dealt with the “CCA” (Complete College America) obsession with ‘corequisite remediation’.  In case you are not familiar with what the method involves, here is my synopsis:

Co-requisite remediation has the student enroll in a credit course and a course providing remediation at the same time.  The method could be called ‘simultaneous’ remediation, since the student deals with the credit course and the remedial course concurrently.

The co-requisite models are a reaction to the sequential remediation done in the traditional models.  For mathematics, some colleges have from two to five remedial courses in front of the typical college course (college algebra, pre-calculus, or similar level).  The logic of exponential attrition points out the flaws in a long sequence (see http://www.devmathrevival.net/?p=1685 for a story on that).
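The exponential-attrition logic can be sketched with a few lines of arithmetic; the 70% per-course pass-and-persist rate below is purely hypothetical, chosen only to show the shape of the decay:

```python
# Hypothetical rate at which students pass one remedial course AND
# enroll in the next course of the sequence.
rate = 0.70

# Completion of the whole sequence decays exponentially with its length.
for length in range(1, 6):
    reach = rate ** length
    print(f"{length} remedial course(s): {reach:.0%} reach the college-level course")
```

Even with a generous 70% rate, a three-course sequence loses about two-thirds of its students before the college-level course, which is exactly the flaw the co-requisite models react to.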

The co-requisite models in use vary in the details, especially in terms of the degree of effort in remediation … some involve 1 credit (1 hour per week) in remedial work, others do more.  Some models involve adding this class time to the course by creating special sections that meet 5 or 6 hours per week instead of 4.

I do not have a basic disagreement with the idea of co-requisite remediation.  Our work in the New Life Project included these ideas from the start; we called it ‘just-in-time remediation’; this emphasis resulted in us not including any course before the Mathematical Literacy course.

The problem is the presumption that co-requisite remediation can serve all or almost all students.  For open-door institutions such as community colleges, we are entrusted with the goal of supporting upward mobility for people who might otherwise be blocked … including the portion needing remediation.  The issue is this:

For what levels of ‘remediation need’ is the co-requisite model appropriate?

No research exists on this question, nor am I aware of anybody working on it.  Like “NCAT” (the National Center for Academic Transformation), the CCA does not generally conduct research on its models.  NCAT actually did some, though the authors tended to be NCAT employees.  The CCA is taking anecdotal information about a new method and distributing it as ‘evidence’ that something works; I see that as a very dangerous tool, one we must resist.

However, there is no doubt that co-requisite remediation has the potential to be a very effective solution for some students in some situations.  Here is my attempt at defining the work space for the research question:  Which students benefit from co-requisite remediation?

Matching students to remediation model:

Here is my matching as a table:

| Of prerequisite material ↓ | Never learned it | Misunderstands it | Forgotten it |
|---|---|---|---|
| Small portion (5% to 25%) | Co-requisite model | Co-requisite model | Co-requisite model |
| Medium portion (30% to 60%) | Remedial course | Remedial course | Co-requisite model |
| Large portion (65% to 100%) | Remedial course(s) | Remedial course(s) | Remedial course |

The 3-by-3 grid is the problem space; within each cell, I have placed my hypothesis about the best remediation model (with the goal of minimizing the number of remedial courses for each student).
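One way to make the hypotheses concrete is to encode the grid as a simple lookup table (a sketch; the category labels are mine, following the grid above):

```python
# Hypothesized matching of student situation to remediation model.
# Keys: (portion of prerequisite material needed, why it is missing).
RECOMMENDATION = {
    ("small", "never learned"):  "co-requisite model",
    ("small", "misunderstood"):  "co-requisite model",
    ("small", "forgotten"):      "co-requisite model",
    ("medium", "never learned"): "remedial course",
    ("medium", "misunderstood"): "remedial course",
    ("medium", "forgotten"):     "co-requisite model",
    ("large", "never learned"):  "remedial course(s)",
    ("large", "misunderstood"):  "remedial course(s)",
    ("large", "forgotten"):      "remedial course",
}

# A student who once learned a moderate amount but has forgotten it:
print(RECOMMENDATION[("medium", "forgotten")])   # co-requisite model
```

Writing it this way makes the research question testable cell by cell: each entry is a hypothesis that data could confirm or overturn.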

As you probably know, advocates like the CCA have been very effective … some states have adopted policies that force extensive use of co-requisite remediation “based on the data”.  Of course the data shows positive outcomes; that happens with almost all reasonably good ideas, simply because there is a good chance of the right students being present, and because of halo and placebo effects.

What we need is some direct research on whether co-requisite remediation works for each type of student (like the 9 types I describe above).  We need science to guide our work, not politics directing it.

 
Data – Statistics – The CCA Mythologies (or Lies?)

Decision makers (and policy makers) are being heavily influenced by some groups with an agenda.  One of those groups (Complete College America, or CCA) states goals which most of us would support … higher rates of completion, a better educated workforce, etc.

A recent blog post by the CCA was entitled “The results are in. Corequisite remediation works.” [See http://completecollege.org/the-results-are-in-corequisite-remediation-works/]  CCA sponsored a convening in Minneapolis on corequisite remediation, where presenters shared data on their results.  Curiously, all of the presenters cited in the blog post had positive ‘results’ for their co-requisite model; clearly, this means that the idea works!  [Or, perhaps not.]

The textbook we use for our Quantitative Reasoning course includes a description of ‘results’ that were very (VERY) wrong.  A survey was done using telephone number lists and country club membership lists, back in the 1930s when few people had telephones.  The survey showed a clear majority favoring one candidate, and it had over 2 million responses.  When the actual election was held, the other candidate won by a larger margin than was predicted in the survey.

The lesson from that story is “we need a representative sample” before using data in any way.  The CCA results fail this standard badly; their ‘results’ are anecdotes.  These anecdotes do, in fact, suggest a connection between co-requisite approaches and better student outcomes.  College leaders, including my own provost, report that they “have seen the numbers” showing that this method is promising.
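The sampling lesson can be shown with a toy calculation (all numbers here are hypothetical, not the actual 1930s figures): a sample drawn from an unrepresentative list confidently reports the wrong winner, no matter how large the sample is.

```python
# True preferences in the electorate (percent) -- hypothetical numbers.
population = {"A": 60, "B": 40}

# Preferences among the only people reachable via phone/club lists,
# a wealthier (and here, B-leaning) slice of the electorate.
phone_list = {"A": 5, "B": 20}

survey_winner = max(phone_list, key=phone_list.get)
true_winner = max(population, key=population.get)
print(f"Survey predicts {survey_winner}; election result: {true_winner}")
```

Note that the sample size does nothing to fix the bias; 2 million responses drawn from the wrong list are still drawn from the wrong list.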

Based on that exposure to questionable data, we are running an experiment this summer … students assessed at the basic arithmetic level are taking an intermediate algebra course, with extended class time and a supplemental instruction leader.  Currently, we still allow students to use an intermediate algebra course to meet a general education requirement, so this experiment has the potential to save those 11 students two (or more) semesters of mathematics.  I have not heard how this experiment is going.

Curiously, other ‘results’ of this same quality would not be used to justify a change by decision makers and policy experts.  People have used data to support some extreme and wrong notions in the past, and are still doing so today.  The difference with the CCA is that its favored methodologies are treated as automatically valid, with the co-requisite remediation models at the top of the list.

Scientists and statisticians would never reach any conclusions based on one set of data.  We develop hypotheses, we build on prior research, we refine theories … all with the goal of better understanding the mechanisms involved in a situation.  Co-requisite remediation might very well work great — but not for all students, because very little ‘works’ for all students.  The question is which students, which situations, and what conditions will result in the benefits we seek.

The CCA claim that “corequisite remediation works” is a lie, communicated with a handful of numbers that support the claim; the lie is then repeated again and again as propaganda to advance an agenda.  We need to be aware of this set of lies, and do our best to help decision makers and policy experts understand a scientific approach to solving problems.

Instead of using absolute statements, which will not be true but are convenient, we need to remain true to the calling of helping ALL students succeed.  Some students’ needs can be met by concurrent remediation; we need to understand which students and find tools to match the treatment to them.  Our professionalism demands that we resist absolutes, point out lies, and engage in the challenging process of using statistics to improve what we do.


CBE … Competency Based Education in Collegiate Mathematics

Recently, I wrote about “Benny” in a post related to Individual Personalized Instruction (IPI).  We don’t hear about IPI like we once did, though we do hear about the online homework systems that implement an individual study plan or ‘pie’.  Instead of IPI, we are hearing about “CBE” — Competency Based Education (or Learning); take a look at this note on the US Department of Education site http://www.ed.gov/oii-news/competency-based-learning-or-personalized-learning

That particular piece is directed towards a K-12 audience; we are hearing very similar things for the college setting.  The Department of Education sent accreditors a Dear Colleague Letter (GEN-14-23) this past December, as academia responds to the call to move away from “seat time” as the standard for documenting progress towards degrees and certification.  About ten years ago, a former provost at my college predicted that colleges would no longer issue grades by 2016, because we would be using CBE and portfolios; clearly, that has not happened … but we should not assume that the status quo is ‘safe’.

In my experience, most faculty have a strong opinion on the use of CBE … some favoring it, probably more opposing it.  As implemented at most institutions in mathematics, I think CBE is a disservice to faculty and students.  However, this is more about the learning objectives and assessments used than about CBE itself.

We need to understand that the world outside academia has real suspicions about the learning in our classes.  The doubts are based on the sometimes vague outcomes declared for our courses, and the perceptions are especially skewed about mathematics.  We tend to base grades on a combination of effort (attendance, completing homework, etc) along with tests written by classroom teachers (often perceived to be picky or focused on one type of problem).

One of the projects I did this past year was a study of pre-calculus courses at different institutions in my state, which lacks a controlling or governing body for colleges.  To understand the variation in courses, I wanted to look at the learning outcomes.  This effort did not last long … because most of the institutions treated learning outcomes as corporate ‘secret recipes’.  Other states do have transparency on learning outcomes — when all institutions are required to use the same ones.

This relates to the political and policy interest in CBE:

CBE will improve education by making outcomes explicit, and ensuring that assessment is aligned with those outcomes.

Sometimes, I think those outside of academia believe that we (inside) prefer to have ill-defined outcomes so that we can hide what we are doing.  We are facing pressure to change this, from a variety of sources.  Mathematics in the first two years can improve our reputation … while helping our students … if we respond in a positive manner to these pressures.

So, here is the basic problem:

  • Most mathematics courses are defined by the topics included, and learning outcomes focus on manipulating the objects within those topics.  The use of CBE tends to result in finely-grained assessments of those procedures.
  • Understanding, reasoning, and application of ideas are usually not included in the CBE implementation.

Compare these two learning outcomes (whether used in CBE or not):

  • Given an appropriate function with polynomial terms, the student will derive a formula for the inverse function.
  • Given an appropriate function with polynomial terms,  the student will explain how to find the inverse function, will find the inverse function, and will then verify that the inverse function meets the definition.

Showing competence on the first outcome deals with a low level learning process; the second rises to higher levels … and reflects the type of emphasis I am hearing from faculty across the country.

I do not see “CBE” as a problem.  The problem is our learning outcomes for mathematics courses, which are focused on behaviors of limited value in mathematics.  A related problem is that mathematics faculty need more professional development on assessment ideas, so that we can improve the quality of our assessments.  Without changing our learning outcomes, the use of a methodology like CBE will wrap a system around some bad stuff — which can make the result look better, without improving the value to students.

We need to answer the question “What does learning mathematics mean in THIS course?”  for every course we teach.  Assessments (whether CBE or not) follow from the learning outcomes we write as an answer to that question.
