## Assessment: Is this “what is wrong” with math education?

I have been thinking about a problem seen in too many of our students … after passing a test (or a course), their proficiency level is still low and their understanding fragile. Even accepting that not all students achieve high levels of learning, the results are disappointing to us and sometimes tragic for students.

Few concepts are more basic to mathematics than ‘order of operations’, so we “cover” this topic in all developmental math classes … just like it’s covered in most K-12 math classes. In spite of this, college students fail items such as:

- Simplify 12 – 9(2 – 7) ÷ (5)(3)
- Write 3x²y without exponents
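
As a worked sketch of the first item (my own, assuming the standard convention that multiplication and division are performed left to right):

```latex
\begin{aligned}
12 - 9(2-7) \div (5)(3)
  &= 12 - 9(-5) \div (5)(3)    && \text{innermost parentheses first}\\
  &= 12 - (-45) \div 5 \cdot 3 && \text{multiply and divide, left to right}\\
  &= 12 - (-9)(3)\\
  &= 12 - (-27) = 39
\end{aligned}
```

A student applying "M before D" from PEMDAS instead groups (5)(3) into 15 and computes 12 − (−45) ÷ 15 = 12 + 3 = 15, which is precisely the misreading the item exposes.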

I could blame these difficulties on the inaccurate crutch called “PEMDAS”, and it’s true that somebody’s aunt sally is causing problems. I might explore that angle (again).

However, I think the basic fault is fundamental to math education at all levels: it deals with the purpose of assessment. Our courses are driven by outcomes and measurable objectives. What does it mean to “correctly use exponential notation”? Does such an outcome imply that students “know when this does not apply”? Or are we only interested in completion of tasks following explicit directions, with no need for analysis?

Some of my colleagues consider the order of operations question above to be ‘tricky’, due to the parentheses showing a product. Some of my colleagues also do not like multiple choice questions. However, I think we often miss the greatest opportunities in our math classes.

Students completing a math course successfully should have fundamentally different capabilities than they had at the start.

In other words, if all we do is add a bunch of specific skills, we have failed. Students completing mathematics are going to be asked to either apply that knowledge to novel situations OR show conceptual reasoning. [This will happen in further college courses and/or on most jobs above minimum wage.] The vast majority of mathematical needs are not just procedural; they involve deeper understanding and reasoning.

Our assessments often do not attempt any discrimination among levels of knowledge. We have a series of problems on ‘solving’ equations … all of which can be solved with the same basic three moves (often in the same order). Do we ask students ‘which of these problems can be solved by the addition or multiplication properties of equality?’ Do we ask students to ‘write an equation that cannot be solved just by adding, subtracting, multiplying or dividing?’
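
One possible pair of answers to that last question (my own illustration, not from any particular course):

```latex
\begin{aligned}
&3x + 5 = 11  && \text{solvable by subtracting 5, then dividing by 3}\\
&x^2 + 5 = 41 && \text{subtracting 5 leaves } x^2 = 36 \text{, which requires a root,}\\
&             && \text{not one of the four basic operations}
\end{aligned}
```

A student who can construct the second equation is demonstrating something no procedural drill reveals: knowledge of where the basic moves stop working.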

For order of operations, we miss opportunities by not asking:

Identify at least two DIFFERENT ways to do this problem that will all result in the same (correct) answer.
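
For instance (my illustration, using a simpler expression than the test item above), even a short problem admits more than one valid path:

```latex
\begin{aligned}
4(3+5) &= 4 \cdot 8 = 32              && \text{parentheses first}\\
4(3+5) &= 4\cdot 3 + 4\cdot 5 = 12 + 20 = 32 && \text{distribute, then add}
\end{aligned}
```

Recognizing that both paths are legitimate, and must agree, is exactly the kind of understanding a “follow the steps” assessment never probes.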

When I teach beginning algebra, the first important thing I say is this:

Algebra is all about meaning and choices.

If all students can do is follow directions, we should not be surprised when their learning is weak or when they struggle in the next course. When our courses are primarily densely packed sequences of topics requiring a rush to finish, students gain little of value … those procedures they ‘learn’ [sic] during the course have little to no staying power, and are not generally important anyway.

The solution to these problems is a basic change in assessment practices. Analysis and communication, at a level appropriate for the course outcomes, should be a significant part of assessment. My own assessments are not good enough yet for the courses I am generally teaching; the ‘rush to complete’ is a challenge.

Which is better: 100 objectives learned at a rote level OR 60 objectives learned at some level of analysis?

This is a big challenge. The Common Core math standards describe a K-12 experience that will always be a rush to complete; the best-performing students will be fine (as always) … others, not so much. Our college courses (especially developmental) are so focused on ‘procedural’ topics that we generally fail to assess properly. We often avoid strong types of assessment items (such as well-crafted multiple-choice items, or matching) in the false belief that correct steps show understanding.

We need conversations about which capabilities are most important for course levels, followed by a re-focusing of the courses with deep assessment. The New Life courses (Math Literacy, Algebraic Literacy) were developed with these ideas in mind … so they might form a good starting point at the developmental level. The risk with these courses is that we might not emphasize procedures enough; we need both understanding and procedures as core goals of any math class.

Students should be fundamentally different at the end of the course.

Join Dev Math Revival on Facebook:

### 4 Comments


By Laura Bracken, September 19, 2016 @ 11:32 am

Two things stand out to me: “If all students can do is follow directions, we should not be surprised when their learning is weak or when they struggle in the next course.” and “We often avoid strong types of assessment items…”

In conversations about assessment, it is difficult to get past the assumption of many faculty that showing work, step by step, is the gold standard for showing understanding. Minds are closed to the possibility that multiple choice questions are gold mines for identifying misconceptions — students rarely guess randomly but rather show us their assumptions (if the question is carefully crafted and tested).

Being fundamentally different must mean more than an increase in algebraic proficiency when confronted with a familiar problem, even in an algebra course.

Nice post, Jack.

By Jack Rotman, September 26, 2016 @ 7:44 am

Thanks, Laura!

By Eric, September 22, 2016 @ 9:33 am

Yes, yes, yes! I completely agree. And therefore I despair.

You would be interested in Robert Kaplinsky’s work on Depth of Knowledge in K-12 mathematics:

http://robertkaplinsky.com/tool-to-distinguish-between-depth-of-knowledge-levels/

I found his blog recently, and I was embarrassed by my own reaction: none of my students could handle any of the questions beyond Level 1 – and when I try to ask those deeper questions on an assessment, I nearly get a revolt! When I ask them in class, my students require so much hand-holding that one not-completely-superficial nongraded question per week is about all I have time for.

I work at the same institution as A. Schremmer, and our department tried to implement a new, deeper curriculum a few years ago. While it demonstrated increased learning, pass rates overall did not improve, so the administration killed the project. The department tried to sustain the standards by using a department final exam with nontrivial problems – but overall averages were consistently in the 30s, so instructors had to either admit that 90% of their class understood the material insufficiently, or keep pass rates artificially above 40% by making their own assessments significantly easier. If we add deeper (and therefore harder) questions to our own assessments, then current-semester pass rates will drop below their current 45%. This is a complete nonstarter with our (and many other colleges’) administration(s) – even if long-term success (due to better conceptual understanding leading to better success in college-level courses) would eventually improve.

If neither my administration nor my students have any interest in the idea that “students should be fundamentally different at the end of the course,” I weary of pursuing such a goal (though I do find encouragement in your advocacy).

By Jack Rotman, September 26, 2016 @ 7:43 am

Thanks … I appreciate you sharing the info and story.