At the Altar of Alignment

The answer to all questions is “42” (see The Hitchhiker’s Guide to the Galaxy).  The solution to all problems is “alignment”.  Academic leaders, government officers, and policy makers are using the word “alignment” in attempts to address many perceived failures in academia.  Yet alignment is neither a necessary property nor a sufficient one for an academic system to be successful.

At the micro-level, people tell us to align course outcomes.  If course A is a prerequisite to course B, then the outcomes should be “aligned”.  In cases where our goals are strictly operational (just the doing, not the understanding or the reasoning), we can align courses.  I’d suggest that this is a very weak methodology for a mathematics curriculum, since aligning outcomes directs our attention to fine levels of granularity instead of the basic story line of a course.  A stronger design focuses on mathematical abilities being developed over time … both within a course and across courses.  Alignment is often counter-productive in mathematics.

At the mid-level, we are told to align the mathematics required with the needs of the student’s program.  In other words, if the primary quantitative need of an occupation is the consumption of statistics, then the mathematics required for the program should be a statistics course.  As attractive as this alignment might be … the practice rests on two unfounded assumptions — (1) that a student KNOWS what they plan to become when they begin college, and (2) that this plan is relatively stable over time for each student.  Unless we plan to return society to pre-global, pre-fluid periods for occupations, alignment is a disservice to many students.  Instead of alignment, we’d be better served by offering a good mixture of valuable mathematics rather than narrowly specialized content.

At the macro-level, we try to align K-12 mathematics with college mathematics (or vice versa).  The unfounded presumption here is that K-12 mathematics exists primarily to prepare students for college mathematics.  And there is an assumption that this ‘alignment’ (whatever it means in this context) will make a significant difference.  Like aligning course outcomes, aligning levels of education tends to push our attention down to small details — in other words, alignment focuses on insignificant details while ignoring larger concerns.  For alignment at this level, think about which would be more powerful:

  • Students have mastered skills A1 to A5, B1 to B7, C1 to C4, and D1 to D8, which logically can be followed by A6 to A9, B8 to B12, C5 to C10, and D9 to D11; OR
  • Students develop learning and academic skills (including mathematics), building reasonable proficiency as well as an ability to learn in a variety of situations using different tools.

We spend time at the altar of alignment, working on ‘solutions’ which have little chance of helping students.  Education is much more than the sum of a finite series of detailed objectives … education is much more than learning just the mathematics needed for an expected occupation … education is more than a series of steps which present a surface logic but lack power in a person’s life.

Our time would be better spent in seeking a vision and some wisdom on educating students, educating them for capacities and success.  The checklist success of alignment is worthless compared to the benefits of education done well.

 Join Dev Math Revival on Facebook:

“Envisioning our Future” launched … Mathematics in the First Two Years

I am developing a new page on this blog … ‘envisioning the future’, devoted to where we are (or should be) going with mathematics in the first two years.  See Envisioning Our Future

The rationale for putting a focus on this ‘envisioning’ is simple — too much of our effort is currently invested either in defending our traditional curriculum OR in responding to demands to change in specific ways.  We need to focus on where we want to go in the long term, instead of merely coping with demands for short-term changes.

Progress means that we are closer to our goals on this path called ‘college mathematics’.  Our strategies should place this progress at the center of our work whenever possible.

I hope that you will find some useful ideas, and perhaps even some inspiration in this content.


The Majority of Community Colleges Use Multiple Measures, or “Using Statistics to Say Nothing Impressively”

My college has a team working on implementing multiple measures placement for our credit math courses.  We are early in a process, so we are primarily collecting information.  One of my colleagues found an organization with both internal resources and links to external resources.  One of those external resources led me to a “CAPR” report (more on the acronym in a moment) with a good example of bad statistics.

So, here’s the question:

What proportion of American community colleges (defined as associate degree granting institutions) use multiple measures to place students in mathematics?

A place with an ‘answer’ to this good question is the Center for the Analysis of Postsecondary Readiness (CAPR), with a report “Early Findings from a National Survey of Developmental Education Practices” (see https://postsecondaryreadiness.org/wp-content/uploads/2018/02/early-findings-national-survey-developmental-education.pdf ).  Using data from two national surveys, this report shows the graph below:

[Figure 1: percent of public two-year institutions using multiple measures for math placement (27% in 2011; 57% in 2016).]

So, what is the probability of a given community college using multiple measures placement?  It’s not 57%, that’s for sure.  In general, this work is being done by states.  If your community college is in California, the probability of using multiple measures is pretty much 100%.  On the other hand, if your community college is in Michigan, the probability is somewhere around 5% to 10%.  Is the probability rising over time?

Here is what the report says to ‘interpret’ the graph:

This argument [other indicators of college readiness provide a more accurate measure of college success] has gained traction in recent years among public two-year institutions.  In a 2011 survey, all public two-year institutions reported using a standardized mathematics test to place students into college-level math courses; as shown in Figure 1, only 27 percent reported using at least one other criterion, such as high school grade point average or other high school outcomes.  Just five years later, 57 percent of public two-year institutions reported using multiple measures for math placement.

Clearly, the implication is that community colleges are choosing to join this ‘movement’.  Of course, some community colleges are making that choice (as mine is doing).  However, a large portion of that 57% in 2016 reflects states with mandated multiple measures (California, North Carolina, Georgia, Florida, probably others).  The aggregate figure therefore tells us nothing about any particular location or college; a genuine movement could only be measured in states without a mandate.
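The arithmetic behind that skepticism is easy to sketch.  The state counts and adoption shares below are invented for illustration (not real data), but they show how a few large mandate states can dominate a national percentage regardless of what colleges elsewhere choose:

```python
# Hypothetical state-level figures (illustrative only, not actual data):
# a couple of large states with a multiple-measures mandate swamp the
# national rate, whatever the non-mandate colleges decide on their own.
states = {
    # state: (number of community colleges, share using multiple measures)
    "large mandate state": (115, 1.00),
    "mid-size mandate state": (58, 1.00),
    "no-mandate state X": (28, 0.07),
    "no-mandate state Y": (40, 0.10),
}

colleges = sum(n for n, _ in states.values())
adopters = sum(n * share for n, share in states.values())
print(f"national adoption rate: {adopters / colleges:.0%}")
```

With these made-up numbers the “national” rate lands near 74%, even though only a handful of colleges in the non-mandate states made the choice themselves — which is exactly the sense in which the aggregate number says nothing about any individual college.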

Essentially, the authors are using statistics to say absolutely nothing in an impressive manner.  Multiple measures is clearly a ‘good thing’ because more colleges are doing it, so the logic goes.  Unfortunately, the data does not mean anything like that — multiple measures are most commonly imposed by non-experts who have the authority to mandate a policy.  [By ‘non-expert’, I mean people whose profession does not involve the actual work of getting students correctly placed in mathematics … politicians, chancellors, etc.]


Cooked Carrots and College Algebra

Perhaps your state or college is using high school grade point average (HS GPA) as a key placement tool in mathematics, in the style of North Carolina.  The rationale for this approach is studies showing a higher correlation between HS GPA and success in college mathematics, compared to standardized tests (SAT, Accuplacer, etc).  Is this a reasonable methodology?

Some of us are doing true multiple measures, where HS GPA is included along with other data (such as test scores).  However, North Carolina is using HS GPA as the primary determinant of college placement; see http://www.nccommunitycolleges.edu/sites/default/files/academic-programs/crpm/attachments/section26_16aug16_multiple_measures_of_placement.pdf .

This HS GPA movement reminds me of a class session in a graduate-level research methods course.  On that day, the professor presented this scenario:

Data shows that students who like cooked carrots are much more likely to succeed in college.  Should a preference for cooked carrots be included as a factor in college admissions?

The goal, of course, was to consider two basic statistical ideas.  First, correlation does not equal explanation.  Second, most correlations involve a number of confounding variables.  In the case of cooked carrots, the obvious confounding variable is money — families eating cooked carrots, as a rule, have more money than those who don’t.  Money (aka ‘socioeconomic status’, or SES) is a confounding variable in much of our work.  We could even conjecture that liking cooked carrots is associated with a stable family structure as well as non-impoverished neighborhoods, which means that cooked-carrot-liking students will tend to have attended better schools.  Of course, this whole scenario is bound up in the cultural context of that era (the 1970s in the USA).
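A tiny simulation makes the confound concrete.  Everything below is invented for illustration: one hidden variable (standing in for SES) drives both carrot preference and college success, and the two end up clearly correlated even though neither causes the other.

```python
import random

random.seed(1)

def pearson(xs, ys):
    """Pearson correlation coefficient, computed directly."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hidden confound: simulated SES for 10,000 students.
ses = [random.gauss(0, 1) for _ in range(10_000)]

# Both outcomes depend on SES plus independent noise -- no direct link
# between carrots and success anywhere in the model.
carrots = [s + random.gauss(0, 1) for s in ses]
success = [s + random.gauss(0, 1) for s in ses]

# Noticeably positive correlation despite zero causal connection.
print(round(pearson(carrots, success), 2))
```

The correlation comes out around one half, purely because the confound feeds both variables — the same mechanism the professor was pointing at.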

In a similar way, proponents point out the high correlation between HS GPA and success in college mathematics.  That correlation (often 0.4 or 0.5) is higher than our test score correlations (often 0.2 or 0.3), which is often ‘proof enough’ for academic leaders who do not apply statistical reasoning to the problem.  Here is the issue:

If I am going to use a measure to sort students, I better have a sound rationale for this sorting.

That rationale is unlikely to ever exist for HS GPA … no explanation is provided beyond the statistical artifact of ‘correlation’.  Student A comes from a high-performing school and has a 2.5 GPA; do they need remediation?  Student B comes from a struggling school and has a 3.2 GPA; are they college ready?  Within a given school, which groups of students are likely to have low GPA numbers?  (Hint: HS GPA is not race-neutral.)
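It also helps to remember how little even the “better” correlation accounts for.  Squaring a correlation gives r², the proportion of variance in the outcome explained by the predictor; applying that to the rough values quoted above:

```python
# r^2: share of variance in course success "explained" by each predictor,
# using the approximate correlation values quoted in the text.
for predictor, r in [("HS GPA", 0.5), ("placement test", 0.3)]:
    print(f"{predictor}: r = {r}, r^2 = {r * r:.0%} of variance explained")
```

Even the stronger predictor leaves roughly three quarters of the variation in student success unexplained — which is why “higher correlation” alone is a thin rationale for a sorting tool.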

If you are curious, there is an interesting bit of research on HS GPA issues done by Educational Testing Service (ETS) in 2009; see https://www.ets.org/Media/Research/pdf/RR-13-09.pdf .  One of the findings: HS GPA is “contaminated” by SES at the student level (p. 14).  Just like cooked carrots.

So, if you are okay with ‘cooked carrots’ being a sorting tool for college algebra, go ahead with HS GPA as a placement tool.

