The Rigor Unicorn

How would you define (or describe) “rigor” in college mathematics classes?  Can you define or describe “rigor” without using the words “difficult” or “challenging”?  I will share a recent definition, and counter with my own definition.

Before anything else, however, we need to recognize the lack of equivalence between rigor and “difficult” (and between rigor and “challenging”).  The basic problem with those concepts is that they are relational — a specific thing is difficult or challenging based on how it interacts with a person or a group of people.  Difficult and challenging are relative judgments, not properties of the object being described.  I found the calculus of trig functions difficult, not because any rigor was involved; it was difficult for me because of the heavy role that memorizing formulae played in that particular class with that particular professor.  Other learners find this same work easy.

A recent definition of “rigor” comes from the Dana Center (DC):

We conclude that rigor in mathematics is a set of skills that centers on the communication and use of mathematical language. Specifically, students must be able to communicate their ideas and reasoning with clarity and precision by using the appropriate mathematical symbols and terminology.

See http://dcmathpathways.org/resources/what-is-rigor-in-mathematics-really

This definition avoids both ‘banned’ words (difficult, challenging), and that is good.  This definition focuses on communication of ideas and reasoning, and that seems good.  When my department discussed this definition recently at a meeting, the question was naturally raised:

Does rigor exist in the absence of communication?

The problem I have with the “DC” definition of rigor is that it suggests that rigor only exists when there is communication taking place.  In other words, rigor does not describe the learning taking place … rigor describes the communication about that learning.  Obviously, communication about mathematics is critical to all levels of learning — whether there is lots of rigor or none.  I don’t think we can equate rigor with communication.  Such an equivalence tempts us to equate rigor with how we measure rigor.

As I think about rigor, I always return to concepts relating to the strength of the learning.  I’d rather have an equivalence between rigor and strength, as that makes conceptual sense.  The rigor exists even in the absence of communication.  Rigor describes the concepts and understanding being developed within the learner, not the object being learned.

My definition:

Rigor in mathematics refers to the accuracy and strength of learning, and specifically to the completeness of the cognitive schema within the learner, including appropriate connections between related or dependent ideas.

In some ways, this definition of rigor suggests that “rigor” and “like an expert” might be equivalent concepts.  I am suggesting that rigor describes the quality of learning compared to complete and perfect learning.  Rigor is not an on-off switch, rather it is a continuum of striving towards the state of being an expert about a set of concepts.

One of the reasons I approach the definition ‘differently’ is that rigor should exist in appropriate ways in every math course — from remedial through basic college mathematics, through calculus, and up to graduate-level and research work.  Rigor is not a destination, where we can declare “this student has rigor”.  Rigor is a quality comparison between the unseen learning and the state of an expert in that particular set of content.  When we teach basic linear functions, I seek to develop rigor in which students have qualities of learning like an expert would have, concerning connections and reasoning.  When we teach calculus of trig functions, I hope we seek to develop qualities of learning similar to an expert’s.

I believe the development of rigor is a fundamental ingredient to making mathematics innately easier for the learner.  When the knowledge is more complete (like an expert) the use of that knowledge becomes more efficient … and the learning of further mathematics requires less energy (just like an expert).  Rigor is the core ingredient in the recipe to make mathematicians from the students who arrive in our classrooms.

Rigor does not start in college algebra, nor in calculus.  Rigor is not the same as ‘difficult’.  Rigor can exist when there is no communication about the learning.  Rigor is the fundamental goal of all learning, at all levels … rigor is a way to measure the quality of learning.  Rigor is the goal of developmental mathematics … the goal of quantitative reasoning … the goal of pre-calculus … the goal of calculus … the goal of statistics.

The “rigor unicorn” is within each of us, and within each of our students.

 

Can We Even Say “Developmental” Anymore?

Some of us say “remedial mathematics”, others say “developmental mathematics”.  Do you feel like you can’t say either one now?

You may have heard that “NADE” changed its name from National Association for Developmental Education to “NOSS” … National Organization for Student Success.  You can understand why this was done, with the recent attacks on all things developmental.  Being understandable, however, does not make this type of thing “right”.  As far as NADE/NOSS is concerned, I think the name change will make it difficult for the organization to articulate a clear identity … since ‘student success’ is an over-arching concept, suggesting that the group will focus on the universe of higher education.  Who will speak on behalf of students who need advocates for overcoming weak preparation?

Clearly, this avoidance of the word developmental is a systemic problem — a symptom of massive denial — a denial being offered as a “solution”.  Obviously, remedial education (aka developmental education) has had significant problems in the past with our focus on too many courses, and not providing enough benefit.  However, multiple measures and co-requisite courses will also be a failure in coping with the gaps in preparation that our students bring to us.  We could debate whether a high-school graduate SHOULD need coursework in college before being able to succeed in mathematics; ‘should’ is a very weak design principle for an educational system.  We must succeed in the real world.  Why should we penalize students by pretending that we have some magic that will somehow enable students with an SAT Math of 420 to succeed in a college curriculum with only added support for their ‘college math’ course?

If leaders don’t want to use labels like ‘developmental’, I encourage them to use the new replacement phrase “black magic”.  It would take some serious black magic to help students succeed in their college program with serious deficiencies in mathematics without doing some direct (prolonged) work on the problem.  In some cases, what is being done to avoid developmental math courses comes across as smoke & mirrors.  People implement grand plans, which (according to them) produce great results for all kinds of students.  Sign them up for “America’s Got Talent!” 🙂

I think we are better off using an accurate word like “remedial” and then having an honest discussion about identifying students who need one or two courses in order to be ready for success in their college program.  We need to think more about the whole college program, and less about passing a particular ‘college’ math course.  Opportunities for second chances and upward mobility are at the center of a stable democracy.

Language is important.  Not using a word (like “developmental”) does not solve the set of problems we face.  There is no magic in education; progress is made by applying deep understanding and critical thinking across a broad community committed to helping ALL students achieve their dreams.

 

Does the HS GPA Mean Anything?

In the context of succeeding in college-level mathematics, does the HS GPA have any meaning?  Specifically, does a high school grade point average over some arbitrary value (such as 3.2) indicate that a given student can succeed in college algebra or quantitative reasoning with appropriate support?

Statistically, the HS GPA should not exist.  The reason is that the original measures (semester grades on a scale from 0 to 4 with 0.5 increments) are ordinal: higher values are greater than smaller values, and nothing more.  A mean of a measure depends upon a presumption of “interval measures” — that the difference between 0 and 1 is the same as the difference between 3 and 4.  The existence of the GPA (whether HS or college) is based on a convenient ignorance of statistics.
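
To make the interval-scale assumption concrete, here is a minimal sketch in Python (with invented grade lists, not real student data) showing how two very different ordinal grade profiles collapse to the same mean:

    # Illustrative only: invented grades, not real transcripts.
    # Averaging letter grades treats every step on the 0-4 scale as the same
    # size (an interval-scale assumption), even though grades are ordinal.
    grade_points = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

    def gpa(grades):
        """Mean of grade points; only meaningful if the scale is truly interval."""
        return sum(grade_points[g] for g in grades) / len(grades)

    steady = ["B"] * 8              # eight B's
    uneven = ["A"] * 4 + ["C"] * 4  # four A's and four C's

    print(gpa(steady))  # 3.0
    print(gpa(uneven))  # 3.0 -- same GPA, very different ordinal profiles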

Given the long-standing existence of the HS GPA, one cannot hope for leaders to suddenly recognize the basic error.  Therefore, let’s assume that the HS GPA is a statistically valid measure of SOMETHING.  What is that something?  Is there a connection between that something and readiness for college mathematics?

The structure of the data used for the HS GPA varies considerably by region and state.  In some areas, the HS GPA is the mean of 48 values … 6 courses at 2 semesters per year for 4 years.  If the school schedule allows for 7 classes, then there are 56 values; that type of variation is probably not very significant for our discussion.  The meaning of the HS GPA is more affected by the nature of the 6 (or 7) classes each semester.  How many of these courses are mathematical in nature?  In most situations, at the current time, we might see 8 of the 48 grades coming from mathematics courses, with another 4 to 8 coming from science courses.  Although most students take “algebra II” in high school, a smaller portion take a mathematically intense science course (such as physics).

In other words, we have a measure which has approximately a 20% basis in mathematics alone.  The other 80% represent English, social science, foreign language, technology, and various electives.  Would we expect this “20% weighting” to produce useful connections between HS GPA and readiness for college mathematics?  If these connections exist, we should see meaningful relationships between HS GPA and accepted measures of readiness.
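
As a quick back-of-the-envelope check on that weighting, here is a sketch using the course counts assumed above (illustrative, not actual transcript data):

    # Rough arithmetic behind the weighting discussed above.
    total_grades = 6 * 2 * 4        # 6 courses x 2 semesters x 4 years = 48 grades
    math_grades = 8                 # four year-long mathematics courses
    science_grades = 4              # could be as high as 8, per the text

    # Share of grades coming from mathematics courses alone:
    print(math_grades / total_grades)
    # Share if mathematically intense science courses are also counted:
    print((math_grades + science_grades) / total_grades)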

So, I have spent some time looking at our local data.  We have only been collecting HS GPA data for a short time (less than one year), and this data can be compared to other measures.  Here are the correlation coefficients for the sample (n > 600 for all combinations); a sketch for computing such coefficients on local data follows the two lists:

  • HS GPA with SAT Math: r = 0.377
  • HS GPA with Accuplacer College Level Math: r = 0.164
  • HS GPA with Accuplacer Algebra: r = 0.338

Compare this with the correlations of the math measures:

  • SAT Math with Accuplacer College Level Math: r = 0.560
  • SAT Math with Accuplacer Algebra: r = 0.627
  • Accuplacer College Level Math with Accuplacer Algebra: r = 0.526
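
For readers who want to run the same comparison on their own placement data, here is a minimal sketch; the file name and columns (hs_gpa, sat_math, acc_clm, acc_alg) are hypothetical stand-ins for an institution’s own records, not our data set:

    import pandas as pd

    # Hypothetical file and column names -- substitute your institution's data.
    df = pd.read_csv("placement_measures.csv")
    cols = ["hs_gpa", "sat_math", "acc_clm", "acc_alg"]

    # Pairwise Pearson correlations, the same kind of coefficient reported above.
    corr = df[cols].corr(method="pearson")
    print(corr.round(3))

    # For a single pair, e.g. HS GPA with SAT Math:
    print(corr.loc["hs_gpa", "sat_math"])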

Of course, correlation coefficients are crude measures of association; even measures with modest coefficients can sometimes have a useful relationship.  Here is a scatterplot of SAT Math by HS GPA:

[Scatterplot: SAT Math by HS GPA]

The horizontal lines represent our cut scores for college level mathematics (550 for college algebra, 520 for quantitative reasoning). As you can see from this graph, the HS GPA is a very poor predictor of SAT Math.  We have, of course, examined the validity of the traditional measures of readiness for our college math courses.  The overall ranking, starting with the most valid, is:

  1. Accuplacer Algebra
  2. Accuplacer College Level Math
  3. SAT Math

The order of the first two differs depending on whether the context is college algebra or quantitative reasoning.  In all cases, these measures show consistent validity in supporting student success.

Here is a display of related data, this time relative to ACT Math and HS GPA.  The curves represent the probability of passing college algebra for scores on ACT Math.

[Graph: probability of passing college algebra, by ACT Math score and HS GPA]

[Source:  http://www.act.org/content/act/en/research/act-scores-and-high-school-gpa-for-predicting-success-at-community-colleges.html ]

For math, this graph is saying that basing a student’s chance of success just on the HS GPA is a very risky proposition.  Even at the extreme (a 4.0 HS GPA), the probability of passing college algebra ranges from about 20% to about 80%.  The ACT Math score, by itself, is a better predictor. The data suggests, in fact, that the use of the HS GPA should be limited to predicting who is not going to pass college algebra in spite of their ACT Math score … ACT Math 25 with HS GPA below 3.0 means “needs more support”.
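
Curves like those in the ACT graph are typically produced with a logistic model.  Here is a minimal sketch of that kind of estimate, assuming a hypothetical local file with an ACT Math score, HS GPA, and a 0/1 pass indicator for college algebra (this is not the ACT’s own analysis):

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Hypothetical columns: act_math, hs_gpa, passed (1 = passed college algebra).
    df = pd.read_csv("college_algebra_outcomes.csv")
    X = df[["act_math", "hs_gpa"]]
    y = df["passed"]

    # Fit a logistic model of passing on the two predictors.
    model = LogisticRegression().fit(X, y)

    # Estimated pass probability for, say, ACT Math 25 and HS GPA 2.8.
    student = pd.DataFrame([[25, 2.8]], columns=["act_math", "hs_gpa"])
    print(model.predict_proba(student)[0, 1])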

So, back to the basic question: What does the HS GPA mean? Well, if one ignores the statistical violation, the HS GPA has considerable meaning — just not for mathematics.  The HS GPA has long been used as the primary predictor of “first year success in college” (often measured by first-year GPA, another mistake).  Clearly, there is an element of “academic maturity or lack thereof” in the HS GPA measure.  A HS GPA below 3.0 seems to indicate insufficient academic maturity to succeed in a traditional college algebra course (see the graph above).

We know that mathematics forms a minor portion of the HS GPA for most students.  Although a small portion of students might have 50% of their HS GPA based on mathematically intense courses, the mode is probably closer to 20%.  Therefore, it is not surprising that the HS GPA is not a strong indicator of readiness for a given college mathematics course.

My college has recently implemented a policy to allow students with a HS GPA of 2.6 or higher to enroll in our quantitative reasoning course, regardless of any math measures.  The first semester of data indicates that there may be a problem with this … about a third of these students passed the course, compared to the overall pass rate of about 75%.

I suggest that the meaning of the HS GPA is that the measure can identify students at risk, who perhaps should not be placed in college level math courses even if their test scores qualify them. In some cases, “co-requisite remediation” might be appropriate; in others, stand-alone developmental mathematics courses are more appropriate.  My conjecture is that this scheme would support student success (a small code sketch of the rule follows the list):

  • Qualifying test score with HS GPA > 3.00, place into college mathematics
  • Qualifying test score with 2.6 ≤ HS GPA < 3.0, place into co-requisite if available, developmental if not
  • Qualifying test score with HS GPA < 2.6, place into developmental math
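
Here is the rule from the list above, written as a small sketch (the cut values are the conjectured ones, not institutional policy; the list does not address students without a qualifying score, so developmental placement is assumed in that case):

    def placement(qualifying_score: bool, hs_gpa: float, coreq_available: bool) -> str:
        """Sketch of the conjectured placement scheme from the list above."""
        if not qualifying_score:
            return "developmental math"   # not covered by the list; assumed here
        if hs_gpa >= 3.0:                 # the list says "> 3.00"; 3.0 exactly is grouped here
            return "college mathematics"
        if hs_gpa >= 2.6:
            return "co-requisite" if coreq_available else "developmental math"
        return "developmental math"

    # Example: qualifying test score, HS GPA of 2.8, co-requisite section offered
    print(placement(True, 2.8, True))     # -> co-requisite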

This, of course, is not what the “policy influencers” want to hear (i.e., Complete College America and related groups).  They suggest that we can solve a problem by both ignoring prior learning of mathematics and applying bad statistics.  Our responsibility, as professionals, is to articulate a clear assessment based on evidence to support the success of our students in their college program.

 

Re-Writing the Job Description for a Math Instructor

[A guest blog post from Larry Stone]

In the two-year colleges of the near future, what will a math instructor do to earn his/her pay? What, exactly, will his/her job involve?

The model I see emerging is: setting some software to deliver a standardized list of prefab learning items (perhaps checking a few boxes to add or delete some items), scheduling a few automated assessments, then letting each student follow an individualized path at an individualized pace (taking individualized assessments). Occasionally, the “instructor” should check in to view the dashboard, just to make sure everyone’s been logging enough hours. Intervening in the actual learning process is only necessary when the software seems to be struggling in guiding some student towards the correct responses – but we may expect this to become less necessary as the software continues to improve year after year.

So, what is the math instructor of the near future? In essence, a software jockey with some tutoring ability.

Now consider: how much skill and training does that require? Presently, we hire experts with master’s degrees to teach almost all of the classes: teaching is a profession. Instead, one can picture a small group of programmers and content developers at the center, with lesser-trained software jockeys (to be nice, let’s call them “student support specialists”) distributed among the schools. The huge potential savings to higher education, where the cost of highly credentialed labor is the largest expense, makes it easy to see why we are inexorably moving towards this model: it’s individualized instruction for all, which makes it sound good, but it’s cheap, which is apparently the ultimate good.

But will we lose something that we could never get back?

When I proudly entered the profession in 1999, it was still a mostly traditional environment. It felt like a perfect fit for me, because it gave me the opportunity to exercise two of my professional strengths: I love to write-write-write, crafting and re-crafting materials to make them fit together and flow ever more naturally, and I love to put on a show, sharing my enthusiasm for math and engineering and the great fun that comes from understanding how the world works. Instructors at that time were expected to be heavily involved in developing their learning objectives, lectures (not a naughty word if done well), exercises, projects, and the like; and as for “putting on a good show,” that was the main reason for teaching at a two-year college instead of a four-year college: good teaching, not voluminous research, was what mattered.

I now see how fortunate was my timing: I’ve had a great run for twenty years. What has surprised me the most is that, having a human mind interacting with a human world, I still continue to have sudden insights about how to make things even better! It keeps the job fresh, fun, interesting, and in tune with an evolving world. Best of all, I am free to immediately incorporate my ideas into my curriculum, assessments and all, without having to worry about how it messes up some software’s learning item connectivity database. The master plan is entirely in my own head, and that I can easily adjust. I feel like a craftsman at work.

I even dare say, I’d be pretty good instructional software if I could be downloaded — but we’re not really there yet with the technology, are we? Are we even close? Perhaps, before we ditch the master craftsman model in order to adopt the factory automation model of education – before we lose the generation that understands what teaching as craft is all about, and find ourselves dissatisfied with the skin-deep, stimulus-response McEducations that will result — we should ask ourselves: how easy will it be to fix THAT situation?

Instead of sliding down that road, we should refocus the original question. What SHOULD a math instructor’s job involve, in a perfect world? I’ll offer just four ideas:

  1. Writing good learning objectives and lessons, hand-crafting exercises and assessments, and using classroom experience (and other experiences, such as from teacher conferences, etc.) to continually improve these materials over the course of one’s career. Besides having the basic drive to produce quality work, the instructor should delight in finding new ways to communicate ideas that seem to open up possibilities for ever deeper learning and insight.
  2. Close, daily grading of student work, in order to hand-write custom feedback and advice for each student, while also learning which areas may need to be re-addressed in the main class (which can be amazingly different from class to class and term to term). It takes time, but this, in my experience, is by far the best way to take the true pulse of your classes. Certainly, it provides a richer feel than turning to summary statistics on a computer.
  3. Using one’s own professional and life experiences to show how learning content relates to the “world out there.” Nobody measures this, but as a student in college I always felt it was truly worth something to be coming into contact with so many different content experts, each applying his/her own unique background and style to the subjects at hand. You learn things that aren’t in the book/aren’t in the software. Believing one has a unique and valuable perspective to share that may inspire some students to go further is part of what motivates one to become a teacher.
  4. Getting to know each student as a person. Besides putting students in a receptive mood, it helps one know how to be personally supportive and encouraging, in ways indescribably more effective than pop-up messages from the software saying “your hard work is paying off!” for the sixth time this week.

 

Computers are great for taking over tedious, repetitive calculations, but this is not what math education involves. If you view any of the above tasks as potentially tedious then, historically speaking at least, you’re in the wrong profession. Meanwhile, show me a computer that loves teaching this stuff and maybe it can learn to take over my job — but the technology isn’t there yet, and may never be.

by Larry Stone; February 26, 2019
