“DM-Live is Gone”

When the AMATYC “New Life Project” started about ten years ago, we needed a place to collaborate and share our work.  A wiki seemed like a good match for those needs, so the ‘dm-live’ wiki was created.

Over time, the purpose for the wiki changed.  Eventually, ‘dm-live’ became a sharing site for information … and the vast majority of that information came from me.  I also run this blog site, which involves some of the same information.

The site that hosted the wiki (wikispaces.com) has closed its doors as a business decision.  This means that the dm-live site has been removed.  In some ways, that is fine since a significant portion of the information was out-of-date in minor or major ways.

If you are looking for information from the dm-live wiki, you will see quite a bit of it here on this blog.  Look at the pages (listed at the top) — “Instant Presentations” and “Algebraic Literacy”.  As I have time, I will include more of the dm-live information on those pages.

In the meantime, if you are looking for something specific and can’t find it — just send me a message (email or comment) and I will see what I can do to provide the information.


The Assessment Paradox … Do They Understand?

We often make the assumption that solving ‘more complicated’ problems shows a better understanding than solving ‘simpler’ problems.  This is an assumption … a logical one, with face validity.  I wonder if actual student learning refutes it.

My thoughts on this come from a class I’ve been teaching this summer: “Fast Track Algebra”, which I taught for the first time.  Fast Track Algebra covers almost all of beginning algebra along with all of intermediate algebra; I’ve taught those separate courses for 45 years, but this was my first time doing the ‘combo’ class.  In case you are wondering how we manage it, the class meets 50% more: 6 hours per week in fall and spring, and 12 hours per week in the summer, as my class did.

Our latest chapter test covered compound inequalities, absolute value equations, and absolute value inequalities.  As with most of the content, none of this is a review of what students ‘know’: any knowledge they had of these concepts is partially or fully faulty, so class time focuses on correcting misconceptions and building a deeper understanding of concepts and procedures.

The class test, in the case of absolute value inequalities, presented three levels of problems:

  1. simplest: the absolute value must be isolated, and the solution is the easiest … like |x| – 2 < 5
  2. typical: the absolute value is already isolated, with a binomial expression inside … like |3w + 4| > 8
  3. complex: the absolute value must be isolated, with a binomial expression inside … like 2|2k – 1| + 6 < 10
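
For reference, here is a worked solution of the ‘complex’ example (my own write-up of the expected process, not the exact wording used in class):

\[
\begin{aligned}
2|2k - 1| + 6 &< 10 \\
2|2k - 1| &< 4 && \text{subtract 6 from both sides} \\
|2k - 1| &< 2 && \text{divide both sides by 2} \\
-2 < 2k - 1 &< 2 && \text{rewrite as a compound inequality} \\
-1 < 2k &< 3 && \text{add 1 throughout} \\
-\tfrac{1}{2} < k &< \tfrac{3}{2} && \text{divide by 2 throughout}
\end{aligned}
\]

The ‘simplest’ item |x| – 2 < 5 needs only the isolation step (|x| < 7) followed by the same compound-inequality step (–7 < x < 7).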

The surprise was that most students did better on the ‘complex’ problems than they did on the simplest problems.  On the simplest problems, they would only ‘do the positive case’, while on the complex problems they would carry out the correct process (both positive and negative cases).

If a student does not do the simpler problems correctly, it is difficult to accept a judgment that they ‘understand’ a concept — even if they got ‘more complicated’ problems correct.  This paradox has been occupying my thoughts, though I have seen some evidence of its existence previously.

So, here is what I think is happening.  As you know, ‘learning’ is a process of connecting the stimulus (such as a problem) with stored information about what to do.  Most of the homework, and the latest work in class, deals with the ‘complex’ problem types.  The paradox seems to be caused by responding to the surface features of the problem and retrieving a memorized process despite a weak conceptual basis.  In the case of absolute value inequalities, students ‘memorized’ the correct process for complicated problems but failed to connect the concept to the simplest problems because those ‘looked different’.

If valid, this assessment paradox raises fundamental questions about assessment across the entire curriculum.  As you know, the standard complaint in course ‘n+1’ is that students cannot apply the content of course ‘n’.  Within course ‘n’, the typical response to this complaint is to emphasize ‘applying’ content to more complicated problems.  Perhaps students can perform the correct procedure on complicated problems without understanding, and without being able to apply procedures in simple settings.

I see this paradox in other parts of the algebra curriculum.  Students routinely simplify rational expressions with trinomials correctly, but fail miserably when presented with binomials (or even monomials).
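
An illustrative pair (my own examples, not actual test items): many students handle the first correctly but mishandle the second, often ‘canceling’ individual terms:

\[
\frac{x^2 + 5x + 6}{x^2 + x - 6} = \frac{(x+2)(x+3)}{(x+3)(x-2)} = \frac{x+2}{x-2}
\qquad\text{versus}\qquad
\frac{3x + 6}{x + 2} = \frac{3(x+2)}{x+2} = 3
\]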

Some of us avoid this paradox by emphasizing applications (known as ‘context’) and by focusing on representations of problems more than on procedural fluency.  With that contextual focus, we will seldom see the assessment paradox.  The challenge on the STEM path is that we need BOTH context with representation AND procedural fluency.

I’m sure most faculty have been aware of this ‘paradox’, and that this post does not have novel ideas for many of us.  I wonder, though, whether we continue to believe that students ‘understand’ because they correctly solved problems with more complexity.


Placement Tests, HS GPA, and Multiple Measures … “Just the Facts”

We know that repeated statements are often treated as proven statements, even if the original version of the statement was not accurate.  In other words, if you want people to accept your point of view … don’t worry about whether it is accurate; just make sure that your statement is repeated by lots of people over a period of time.  Like “HS GPA is a better predictor than placement tests”.

The original message seems to have been based on a CCRC report (“High Stakes Placement … “, Scott-Clayton, 2012): https://ccrc.tc.columbia.edu/publications/high-stakes-placement-exams-predict.html  The report’s conclusion about tests versus HS GPA is this:

First, focusing on the first or second columns, which examine the predictive value of placement scores alone for slightly different samples, one can see that exam scores are much better predictors of math outcomes than English outcomes. The overall proportion of variation explained is 13 percent for a continuous measure of math grades, compared with only 2 percent for a continuous measure of English grades. This is consistent with the findings from previous research.

The data being referenced is this:

[The table from the report is not reproduced here.]
Not only is ‘HS GPA is better’ inaccurate for the original research (in mathematics), but people never mention a fundamental issue:

 

The data for the 2012 study came from ONE “large urban community college system” (LUCCS)

Now, I don’t doubt the basic premise that including more variables can improve a decision (such as a test plus HS GPA).  The problem is that the message “HS GPA is better” has been repeated so often, by so many people, that decision makers accept it as truth.  The truthfulness depends a great deal on the decision being made: placement in English, or placement in Mathematics.  The situation looks pretty clear (in the LUCCS data) for English, where using HS GPA alone seems the better option.  In Mathematics … not so much!

Researchers have developed models for placement in mathematics based on HS transcript data, though I’ve never seen a proven model using just HS GPA.  The variables in these research models involve the following (a rough sketch of such a rule follows the list):

  • Specific mathematics courses completed in high school (especially grades 11 and 12)
  • Specific grades received in those mathematics courses
  • As a minor factor, the overall HS GPA
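
To make the structure concrete, here is a purely hypothetical decision rule of the kind this research produces.  The course names, grade cutoffs, and GPA thresholds are invented for illustration; they are not taken from MMAP or any other study.

# Hypothetical multiple-measures placement rule (illustrative only).
# All course names and thresholds below are invented, not from any actual model.

def place_into_college_algebra(highest_hs_math: str,
                               grade_in_that_course: float,
                               hs_gpa: float) -> bool:
    """Return True if the (hypothetical) rule places the student directly
    into college algebra.

    highest_hs_math: highest math course completed in grades 11-12
    grade_in_that_course: grade earned in that course, on a 4.0 scale
    hs_gpa: overall high school GPA (a minor factor in these models)
    """
    # The specific course, and the grade in that course, carry most of the weight ...
    if highest_hs_math in {"Precalculus", "Calculus"} and grade_in_that_course >= 2.0:
        return True
    if highest_hs_math == "Algebra 2" and grade_in_that_course >= 3.0:
        return True

    # ... while the overall HS GPA acts only as a secondary adjustment.
    if highest_hs_math == "Algebra 2" and grade_in_that_course >= 2.0 and hs_gpa >= 3.2:
        return True

    return False

print(place_into_college_algebra("Algebra 2", 2.5, 3.5))   # True (GPA tips the decision)
print(place_into_college_algebra("Algebra 2", 2.5, 2.8))   # False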

A good prototype of this scheme is the California “MMAP” work; see http://rpgroup.org/Portals/0/Documents/Projects/MultipleMeasures/DecisionRulesandAnalysisCode/Statewide-Decision-Rules-5_18_16_1.pdf  .  Rather than the ‘drive-off-the-cliff’ approach (North Carolina, Florida, etc), this is a scientific approach to a complicated problem.  Few of our colleges, and few states, are willing to invest the resources necessary for this truth-in-multiple measures approach.  [The fact that California can do this seems to have been a consequence of decisions about higher education in that state 50 and 60 years ago.  We probably won’t see that again.]

Some additional truth about HS GPA:

High School GPA transmits inequity

Here is some data from the US Department of Education transcript study (2009):

[The HS GPA data table is not reproduced here.]
The issue is not that HS GPA transmits inequity while placement tests do not.  In the case of SAT Math (and ACT Math) the gaps are known to exist.  The issue is that HS GPA transmits the inequity without regard to the student’s abilities in a subject domain.

The race/ethnicity ‘gaps’ for HS GPA are just one way to establish that it transmits inequity.  Economic and geographical inequities are also apparent in the HS GPA data.  At least the test developers strive to minimize their inequity; items which show a significant differential impact are removed from the tests.

Placement tests are less harmful to students than HS GPA.

The truth about multiple measures is that they will only help students when implemented in a scientific manner in the location or region involved.  HS GPA by itself will harm students.


How to Impact Student Success

College leaders (presidents, trustees, chancellors, etc.) have discovered “student success” as an issue, and then they promptly implement systemic changes which impede student success.

In some ways, their errors are understandable.  We’ve got plenty of data which shows …

  • Traditional remediation in mathematics most often functions as a barrier to students
  • Students who complete college math in their first year are more likely to complete their program/degree
  • Placement by single-measure tests tends to underplace 20% to 30% of the students

Leaders have also accepted the surface logic of “alignment” (see “At the Altar of Alignment”), just like some folks accept the logic of ‘trickle-down economics’.  Alignment takes many forms … from aligning K-12 and college expectations to selecting a math course for a student’s program.  Little data exists to show that alignment improves student success; like tax cuts, alignment is difficult to argue against, even though we should.

When I talk about student success, I am referring to the important measures of student success: learning, preparation, and a liberating education.  Passing my math course is not a measure of student success … being able to deal with mathematics in other situations IS.  Curiously, I asked my college president about measuring student learning as a component of student success; the response was that we should drop course grades and move to a portfolio.

So, here is the type of thing I mean by student success.

In a conversation, a small group of science faculty shared their frustration with students’ inability to apply math (algebra in particular) to scientific contexts.  A low-level example was a simple temperature conversion: T_C = (5/9)(T_F – 32); given a temperature of 40 degrees C, convert to degrees F.

Many students treat this as a calculation problem, computing (5/9)(40 – 32), instead of as an algebraic one.  It seems to make no difference whether subscripts are used or just the letters C and F.
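
For the record, the algebraic reasoning the science faculty are looking for goes like this:

\[
\begin{aligned}
40 &= \tfrac{5}{9}\,(T_F - 32) \\
72 &= T_F - 32 && \text{multiply both sides by } \tfrac{9}{5} \\
T_F &= 104
\end{aligned}
\]

The calculation students actually perform, (5/9)(40 – 32) ≈ 4.4, does not give the correct temperature in either scale.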

Student success is being able to reason (algebraically) in this case to get the job done.

In this case, we have ‘alignment’.  The math course students took before that science course included substituting values for both the independent and the dependent variable.  Alignment is a very (VERY) weak estimate of preparation for student success.

My goal for student success is not especially lofty.  In a nutshell, this is it:

Given a situation involving application of concepts and skills easily within the mathematical reach of the students, they will formulate a reasonable solution method and execute this solution with reasonable precision.

This goal is quite a bit above the useless definition of student success seen by college leaders: course completion one-at-a-time.  Student success means that my colleagues in other disciplines would be pleasantly surprised by how well our students apply mathematical concepts and relationships which arise in that discipline.  Those faculty would not need to dilute the scientific rigor of their course (in whatever discipline) just because the students we send to them lack quantitative understanding.

We live in an era of ‘completion obsession’.  It’s not that program completion is bad … completion is a great thing; the best day of my year is getting to see some of my students walk across the stage to get their degree.  The problem is that the obsession with completion devalues the education we are supposed to be providing to our students.  In the completion fixation, we watch students on the marathon course to make sure that they pass each critical point — without noticing that many students are running without understanding strategy or skill.  It’s like perseverance is the only trait we value.

Our job is to keep education in mathematics.  Student success means that we’ve made a difference in how our students are able to deal with quantitative situations; mathematics is an enabler of multiple career options for all students, not a subject to be gotten-done-with.

 

 
