Category: Assessment in Math Classrooms

The Assessment Paradox … Do They Understand?

We often make the assumption that solving ‘more complicated’ problems shows a better understanding than solving ‘simpler’ problems.  This is an assumption … a logical one, with face validity.  I wonder if actual student learning refutes it.

My thoughts on this come from a class I’ve been teaching for the first time this summer: “Fast Track Algebra”.  Fast Track Algebra covers almost all of beginning algebra along with all of intermediate algebra; I’ve taught those separate courses for 45 years … this was my first time doing the ‘combo’ class.  In case you are wondering how we can manage it, the class meets 50% more — 6 hours per week in fall & spring, and 12 hours per week in the summer session, as my class did.

Our latest chapter test covered compound inequalities, absolute value equations, and absolute value inequalities.  As with most of the content, none of this is truly review of what students ‘know’ — any knowledge they had of these concepts is partially or fully faulty, so class time focuses on correcting misconceptions and building a deeper understanding of concepts and procedures.

The class test, in the case of absolute value inequalities, presented 3 levels of problems (a sketch of the correct process for each follows the list):

  1. simplest: just need to isolate the absolute value, with the easiest solution … like |x| – 2 < 5
  2. typical: the absolute value already isolated, with a binomial expression inside … like |3w + 4| > 8
  3. complex: need to isolate the absolute value, with a binomial expression inside … like 2|2k – 1| + 6 < 10
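
As a sketch of the correct process at each level, using the sample problems above (the worked solutions are my addition, not the test key):

\begin{align*}
% Level 1: isolate the absolute value, then write the compound inequality
|x| - 2 < 5 \;&\Rightarrow\; |x| < 7 \;\Rightarrow\; -7 < x < 7 \\
% Level 2: 'greater than' splits into an OR of two cases
|3w + 4| > 8 \;&\Rightarrow\; 3w + 4 > 8 \ \text{or}\ 3w + 4 < -8 \;\Rightarrow\; w > \tfrac{4}{3} \ \text{or}\ w < -4 \\
% Level 3: isolate first, then write the compound inequality
2|2k - 1| + 6 < 10 \;&\Rightarrow\; |2k - 1| < 2 \;\Rightarrow\; -2 < 2k - 1 < 2 \;\Rightarrow\; -\tfrac{1}{2} < k < \tfrac{3}{2}
\end{align*}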

The surprise was that most students did better on the ‘complex’ problems than they did on the simplest problems.  On the simplest problems, they would only ‘do positive’, while on the complex problems they would carry out the correct process (both the positive and negative cases).
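
To make the ‘do positive’ error concrete, the typical incomplete work on the simplest problem looked something like this (my reconstruction, not a transcript of student work):

\[
% Incomplete: the absolute value bars are treated like parentheses,
% and only the positive case is solved
|x| - 2 < 5 \;\Rightarrow\; x - 2 < 5 \;\Rightarrow\; x < 7
% The negative case, and therefore the answer -7 < x < 7, never appears.
\]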

If a student does not do the simpler problems correctly, it is difficult to accept a judgment that they ‘understand’ a concept — even if they got ‘more complicated’ problems correct.  This paradox has been occupying my thoughts, though I had seen some evidence of it previously.

So, here is what I think is happening.  As you know, ‘learning’ is a process of connecting a stimulus (such as a problem) with stored information about what to do.  Most of the homework, and the latest work in class, deals with the ‘complex’ problem types.  The paradox seems to be caused by responding to the surface features of the problem and retrieving a memorized process in spite of a weak conceptual basis.  In the case of absolute value inequalities, students ‘memorized’ the correct process for complicated problems but failed to connect the concept to the simplest problems because those ‘looked different’.

If valid, this assessment paradox raises fundamental questions about assessment across the entire curriculum.  As you know, the standard complaint in course ‘n+1’ is that students cannot apply the content of course ‘n’.  Within course ‘n’, the typical response to this complaint is to emphasize ‘applying’ content to more complicated problems.  Perhaps students can perform the correct procedure on complicated problems without understanding, and without being able to apply procedures in simple settings.

I see this paradox in other parts of the algebra curriculum.  Students routinely simplify rational expressions with trinomials correctly, but fail miserably when presented with binomials (or even monomials).
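
A contrast of the kind I have in mind (my own illustration, not items from an actual test):

\begin{align*}
% Trinomial case: matches the drilled pattern (factor, then divide out the common factor)
\frac{x^2 + 5x + 6}{x^2 + 2x - 3} &= \frac{(x+2)(x+3)}{(x+3)(x-1)} = \frac{x+2}{x-1} \\
% Binomial case: the same concept, but the surface features look different;
% students often 'cancel' individual terms instead of factoring first
\frac{3x + 6}{x^2 - 4} &= \frac{3(x+2)}{(x+2)(x-2)} = \frac{3}{x-2}
\end{align*}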

Some of us avoid this paradox by emphasizing applications (known as ‘context’) and by focusing on representations of problems more than on procedural fluency.  With that contextual focus, we will seldom see the assessment paradox.  The challenge on the STEM path is that we need BOTH context with representation AND procedural fluency.

I’m sure most faculty have been aware of this ‘paradox’, and that this post does not have novel ideas for many of us.  I wonder, though, whether we continue to believe that students ‘understand’ because they correctly solved problems with more complexity.

Benny, Research, and The Lesson

The most recent MathAMATYC Educator (Vol 6, Number 3; May 2015) has a fascinating article, “Benny Goes to College: Is the ‘Math Emporium’ Reinventing Individually Prescribed Instruction?”, by Webel et al.  This article describes research on an emporium model using a popular text via a popular online system.  A group of students who had passed the course and the final exam were interviewed; some standard word problems were presented, along with some less standard problems.

At the heart of the emporium’s approach to teaching and learning we see the same philosophy that undergirded Benny’s IPI curriculum: the common sense idea that mathematics learning is best accomplished by practicing a skill until it is mastered.

I would phrase the last part differently, though you probably know what the authors mean … this is more of ‘the common mythology that mathematics …’ (‘common sense’ implies a reasonableness that seems lacking, given students’ attitudes about mathematics).

The phrase “Benny’s IPI” is a reference to a prior study by Erlwanger (1973), wherein the author looked at an individually prescribed instruction (IPI) system; Benny was a similarly successful student who left the course with some very bothersome ideas about the topics that were ‘covered’ in the course.  In both studies, the primary method involved third-party interviews with students.

The current study had this as a primary conclusion:

We see students who successfully navigate an individualized program of instruction but who also exhibit critical misconceptions about the structure and nature of the content they supposedly had learned.

Although I am not a fan of emporium-related models, I am worried about the impact of this study.  These worries center on what the lesson is … what do we take away?  What does it mean?  The research does not compare methodologies, so there is no basis for saying that group-based or instructor-directed learning is better.  The authors make some good points about considering the goals of a course beyond skills or abilities.  However, I suspect that the typical response to this article will be one of two types:

  • Emporium models, and perhaps online homework systems, are clearly inferior; the research says so.
  • Emporium models, and online homework systems, just need some adjustment.

Neither of these is a reasonable conclusion.

I spend quite a bit of time in my classes in short interviews with students.  Most of my teaching is done within the framework of a face-to-face class combining direct instruction with group work, with homework (online or not) done outside of class time.  Typically, I talk with each student between 5 and 15 times per semester; I get to know their thinking fairly well.  Based on my years of doing this, with a variety of homework systems (including print textbooks), I would offer the following observations:

  1. Misconceptions and partial understandings are quite common, even in the presence of good ‘performance’.
  2. Student understanding tends to be underestimated in an interview with an ‘expert’, at least for some students.

I have seen proposed mathematics just as wrong as that cited in the current study (or even worse); granted, these usually do not appear when talking with a student earning an A (as happened in the study) … though I am reluctant to generalize this to either my teaching or the homework system used.  Point 1 basically says that easy assessments often miss the important ideas; a correct answer means little … even correct ‘work’ may not mean much.

Point 2 is a much more subjective conclusion.  However, I routinely see students show better understanding when working alone than I hear when talking with them; part of this would be the novice-level understanding of mathematics, making it difficult to articulate what one knows … another part is a complex of social status and of the expectations students bring to an interview with their instructor.

Many of us are experiencing pressure to use “best practices”, to “follow the research”.  The problem is that good research supports a better understanding, but almost all research is used to advocate for particular ‘solutions’.  This is an old problem … it was here with “IPI”, is here now with “emporium”, and is likely to be with us for the next ‘solution’.

The “Lesson” is not “use emporium”, nor is it “do not use emporium”.  The lesson is more important than that, and involves each of us getting a more sophisticated (and more complicated) understanding of what it means to learn mathematics.  Most teachers seek this goal; the problems arise when policy makers and authorities see “research” and conclude that they’ve found the solution.  We need to be the voice for our profession: to state clearly why it is important to learn mathematics … to articulate what that means … to develop courses which help students achieve that goal … and to use assessments that measure the entire spectrum of mathematical practice.

The Common Core State Standards and College Readiness

At the recent Forum on mathematics in the first two years (college), we had several very good presentations — some of these very short.  Among that group was one by Bill McCallum, a primary author of the mathematics portion of the Common Core State Standards.  Bill focused his comments on 9 expectations within the high school standards intended to represent college and career readiness.

The expectations listed are:

  • Modeling with mathematics
  • Statistics and probability
  • Seeing algebra as based on a few coherent principles, not a multitude of unrelated techniques
  • Building and interpreting functions to represent relationships between quantities
  • Fluency
  • Understanding
  • Making sense of problems and persevering in solving them
  • Attending to precision
  • Constructing and critiquing arguments

Of these, Dr. McCallum suggested that fluency is the only one commonly represented in mathematics courses in the first two years.  The reaction of the audience suggested some agreement with this point of view.

So, here is our problem:  We included all 9 expectations when the Common Core standards were developed.  We generally support these expectations individually.  Yet, students can … in practice … do quite well in college if they arrive with a much smaller set of these capabilities.  Clearly, the Common Core math standards expect more than is needed for college readiness.

What subset of the Common Core math expectations are ‘necessary and sufficient’ for college readiness?

For example, even though it is critical in the world around us, modeling does not qualify for my short list; neither does statistics and probability.

We are basically talking about the kinds of capabilities that placement tests should address.  Measuring 9 expectations (all fairly vague constructs for measurement) is not reasonable; measuring 4, perhaps 5, might be.

I think we should develop a professional consensus around this question.  The answer will clearly help the K-12 schools focus on a critical core, and can guide the work of companies who develop our placement tests.

Quality Instruction and Class Design

Last year, my college created a new structure for departments and programs.  Instead of a chairperson for each department within the 3 academic divisions, we got associate deans and ‘faculty program chairs’.  The associate deans are the administrative players ‘in charge’ of two or three of our old departments.  In my case, math and science share an associate dean.  We have 7 faculty program chairs for the two departments; I am in the role of faculty program chair for developmental mathematics.  [Not much time provided in the workload, but the work is rewarding.]

Currently, I am focusing on one key idea for our program:

How do we create quality experiences for our students?

We want higher pass rates and completion (of course).  However, our students need classes that serve a real purpose.  Designing a course so that grades and scores are consistently higher than a student’s actual learning does not help students.  Some people discuss this under the umbrella of ‘grade inflation’, though our interest is in striving for quality in instruction and class design.

So, here are some issues I have been thinking about:

  • Should any ‘points’ be awarded for completing homework?
  • Should points be awarded based on the level of performance during homework?
  • Does “dropping a low test” support or hinder a high quality class?
  • If a student does not come close to passing the final exam, should they get a passing grade if their other work creates a high enough ‘average’?
  • Is it okay if students with a 2.0 or 2.5 grade are not ready for the next math course?
  • Do high grades (3.5 and 4.0) uniformly mean that the student is ready for the next math course?

When courses are sequential, the preparation for the next math course is a critical purpose of a math class.  Assigning a passing grade, therefore, is a definite message to the student that they are ready to take the next class.  In practice, we know that this progression is seldom perfect — we usually provide some review in the next class, even though students ‘should’ know that material.  At this point, our efforts are dealing with the existing course outcomes, which tend to be more procedural than we would like; eventually, we will raise the reasoning expectations in our courses (with a corresponding reduction in procedural content).

Of special interest to me are the issues related to homework.  Some faculty assign up to 25% of the course grade based on homework.  Like many places, we are heavy users of online homework systems (My Labs Plus as well as Connect Math).  When those systems work well for students, they support the learning process; most students are able to achieve a high ‘score’ on a homework assignment.  Should this level of achievement balance out a lower level on a test and/or final exam?  Take a scenario like this:

Derick completes all homework with a friend; with a lot of effort, his homework is consistently 90% and above.  All of Derick’s tests are between 61% and 68%, and he gets a 66% on the final exam.  The high homework average raises his course grade to 71%, and he receives a 2.0 (C) grade in the algebra class.
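
The arithmetic behind Derick’s grade might look like this, assuming homework counts 25%, tests 50%, and the final exam 25% (the weights are my assumption for illustration; the scenario does not specify them):

\[
% homework at 90%, test average 65%, final exam 66%
0.25(90) + 0.50(65) + 0.25(66) = 22.5 + 32.5 + 16.5 = 71.5\%
\]

Without the homework component, the same test and final performance averages about 65%; the homework weight alone moves Derick across the passing line.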

This scenario is a little extreme (it’s only possible with a high weight on homework … >15%).  What is fairly common is a situation where homework is 10% of the course grade and the student passes 2 of the 5 tests; one of the 3 not-passed tests is ‘dropped’, and the student easily qualifies for a 2.0 (C) grade.  One of the cases I saw this past semester involved this type of student achieving a 52% on the final exam.

In our case, we already have a common department final exam for the primary courses (pre-algebra up to pre-calculus).  In the case of developmental courses, we have a policy that requires 25% of the course grade to be based on that final exam.  This design for the final exam is a good step towards the quality we are striving for.  We are realizing that we cannot stop there.

Like most community colleges, we have courses taught by both full-time and adjunct faculty; the last figures I saw showed about 40% taught by full-time and 60% by adjunct faculty.  Because adjunct faculty are not consistently engaged in our conversations, their practices tend to vary more than those of full-time faculty.  We will be looking for ways to help our large group of adjuncts become better integrated within the program, even in the face of definite budgetary constraints.  Fortunately, many of my full-time colleagues are committed to helping these efforts to improve the quality of our program.
