Culture of Evidence … Does it Exist? Could it Exist??

Perhaps you are like me … when the same phrase is used so extensively, I develop an allergic-type reaction to the phrase.  “Awesome” is such a phrase, though my fellow educators do not use that phrase nearly as much as our students.  However, we use ‘culture of evidence’, and I have surely developed a reaction to it.

Part of my reaction goes back to a prior phrase … “change the culture”, used quite a few years ago to describe the desire to alter other people’s beliefs as well as their behavior.  Education is based on a search for truth, which necessarily implies that individuals are responsible for their own beliefs and behavior.  Since I work for neither BuzzFeed nor Complete College America, my priority is on education in this classic sense.

The phrase “culture of evidence” continues to be used in education, directed at colleges in particular.  One part of this is a good thing, of course … encouraging the use of data to analyze problems.  However, that is not what the phrase means.  It’s not like people say “apply the scientific method to education”; I can get behind that, though we need to remember that a significant portion of our work will remain more artistic and intuitive than scientific.  [Take a look at https://www.innovativeeducators.org/products/assessing-summer-bridge-developing-a-culture-of-evidence-to-support-student-success for example.]

No, this ‘culture of evidence’ is not a support for the scientific method.  Instead, there are two primary components to the idea:

  • Accountability
  • Justification by data

Every job and profession comes with the need for accountability; that’s fine, though accountability is the minor emphasis of ‘culture of evidence’.

The primary idea is the justification by data; take a look at the student affairs professional viewpoint (https://www.naspa.org/publications/books/building-a-culture-of-evidence-in-student-affairs-a-guide-for-leaders-and-p  ) and the Achieving The Dream perspective (http://achievingthedream.org/focus-areas/culture-of-evidence-inquiry  ).

All of this writing about “culture of evidence” suggests that the goal is to use statistical methodologies in support of the institutional mission.  That gives the phrase a scientific sound, but does it make any sense at all?

First of all, the classic definition of culture (as used in the phrase) speaks to shared patterns:

Culture: the set of shared attitudes, values, goals, and practices that characterizes an institution or organization  (Merriam-Webster online dictionary)

In an educational institution, how many members of the organization will be engaged with the ‘evidence’ as justification, and how are they involved?  The predominant role is one of data collection … providing organizational data points that somebody else will use to justify what the organization wants to justify.  How can we say ‘culture of evidence’ when the shared practice is recording data?  For most people, it’s just part of their job responsibilities … nothing more.

Secondly, what is this ‘evidence’?  There is an implication that there are measurements possible for all aspects of the institutional mission.  You’ve seen this — respected institutions are judged as ‘failures’ because the available measurements are negative.  I’m reminded of an old saying … the difference between treating our measurements as important and measuring what is important.

There is also the problem of talking about ‘evidence’ without the use of statistical thinking or designs.  As statisticians, we know that ‘statistics’ is used to better understand problems and questions … but the outcome of statistics is frequently that we have more questions to consider.

No, I think this “culture of evidence” phrase describes both an impossible condition and an undesirable goal.  We can’t measure everything, and we can’t all be statisticians.  Nor should we want judgments about the quality of an institution to be reduced to summative measures of a limited set of variables covering a limited range of ‘outputs’ in education.

The ‘culture of evidence’ phrase and its derivatives (‘evidentiary basis’, for example) are used to suggest a scientific practice without any commitment to the scientific method.  As normally practiced, ‘culture of evidence’ often conflicts with the scientific method (by supporting pre-determined answers or solutions) and has little to do with institutional culture.

Well, this is what happens when I have an allergic reaction to the written word … I have a need to write about it!


Why We Will Stop Doing Pathways in Mathematics

Currently, and for the past few years, “pathways” has been a big thing in community college mathematics education.  For students not needing calculus or similar courses, alternate paths have been established — with a focus on courses such as Statway™, Mathematical Literacy, and Foundations of Mathematical Reasoning.  The fact that all three of those courses are very similar in content is not an accident, and the fact that the three organizations involved collaborated is a key reason for their success.

The reasoning behind the creation of pathways is essentially “give them what they need, not what they don’t need”.  Students with a pre-calculus target are still placed into the old-fashioned developmental math courses, and students with other targets are placed into a ‘pathway’.  All students are generally required to meet some arithmetic criteria before starting at the Math Literacy level or beginning algebra.

My own work has certainly played a role in this creation of pathways.  However, pathways were not the intent of the efforts that began this work.  Nor do pathways have a good prognosis for long-term survival.

Let’s go through some of the reasons why “pathways” are not a long-term strategy.

Reason 1: Pathways are a dis-service to “STEM” (calculus-bound) students!
The original design of the major pathways courses (Quantway™, Math Literacy and Foundations of Mathematical Reasoning) was based on identifying what all students needed in college-level mathematics — statistics, quantitative reasoning, AND pre-calculus.  These outcomes were then categorized in two clusters … those needed by ALL students became the core of the Math Literacy course, and those primarily needed by pre-calculus students became the core of Algebraic Literacy.  [Algebraic Literacy also includes some outcomes needed for technical programs.]

In effect, “pathways” is preventing STEM (calculus-bound) students from getting the learning they need for success.  We have accumulated data showing that the traditional developmental algebra courses do not add significant value for these students when they take pre-calculus.  In addition, we also know that the traditional courses were not designed for this purpose — they were designed to replicate the 9th to 11th grade content of a 1970’s high school.

Pathways create a better experience for non-STEM students, at the price of harming (relatively) those bound for pre-calculus.

Reason 2: Curricular complexity costs too much
One of the extreme cases I have seen is a college with SIX different courses at the Math Literacy level.  Three of these were quite specialized for students in particular occupational programs.  The other three, however, were general in nature — a Math Literacy course and two basic algebra courses.

Curricular complexity raises the cost of support functions at an institution, advising in particular.  Few colleges can support this extra work in the long-term, even when the initial launch of those efforts is strongly supported by the then-current administration & governing board.  As time goes on, the focus on advising slips … mistakes are made … and a later administration will question why things are so complicated.

This curricular complexity also raises costs within the mathematics department.  More courses at the same level means more difficult scheduling, less predictable enrollments in each course, and a host of faculty coordination issues.  Unless an institution has excess resources not needed elsewhere, the mathematics department will realize in a few years that it cannot support the complex curriculum.

Reason 3: Pathways allow the continuation of arithmetic courses at colleges
The presence of arithmetic courses at a college involves several problems and costs; the fact that our profession has not accepted these as overwhelming rationales for discontinuing arithmetic courses is a failure with moral and economic dimensions.

First of all, these extra courses at the developmental level are primarily taken by students of poverty and minorities.  This is the moral dimension for us:  these are the students coming to college to get out of poverty, who are then required to take one or more courses prior to the course that is a prerequisite to their required course.  No possible benefit from learning arithmetic can justify this process; in fact, there is no evidence of any significant benefit for taking such arithmetic courses in college.

Secondly, arithmetic courses in a college create costs for the mathematics department.  We often have a fairly separate set of faculty (heavily adjunct) teaching these courses, and these faculty are seldom qualified to teach a college mathematics course.  In many colleges, the arithmetic courses are administered in a separate department.  As faculty, we should want to design a curriculum that does not depend on a course at the arithmetic level.

Thirdly, the presence of arithmetic courses at a college will tend to perpetuate the outdated focus on procedures and answers.  This conflicts with the design of Math Literacy, and impedes development of basic reasoning needed even in a traditional basic algebra course.

Reason 4: External Forces Will Continue to Push Us To Change
So far, the evaluation of ‘pathways’ has focused exclusively on the impact for students taking Math Literacy (or companion course) as preparation for statistics or quantitative reasoning courses — specifically, students who enroll in stat or QR after passing Math Literacy.

Curricular complexity means that there will be a less successful experience for students needing pre-calculus … by definition, because those students need two courses (beginning algebra, intermediate algebra) compared to the one & done of Math Lit.  There are also operational causes for other ‘bad’ data to show up — students taking Math Literacy instead of the course they were supposed to take, for example.

In addition, we can predict that these external change agents will critique our developmental math courses against modern standards (whether the Common Core or the NCTM standards).  We are not ready for this critique, and have no response for the results that are bound to come from it — that developmental mathematics operates as if the year is still 1975, ignorant of the fundamental changes in our students’ experiences in K-12 mathematics.

 

In a way, I am reminded of something I learned at a conference session on graph theory and traffic design.  Our intuition might say that it is better to have more options in street designs, where there are several north-south options and several east-west options.  The traffic design results were the opposite … that the best throughput for a traffic system is the fewest possible streets.

A pathways curricular design presumes the presence of at least two courses at the same level in a sequence.  This design is not particularly stable, as a system.  In the long term, I think the system will collapse down to one of the options.

We need to be prepared for the demise of pathways so that we can maintain the improvements from those efforts.  The danger is in assuming that both Math Literacy AND the old courses will ‘always’ be there.  Within a few years, one of them will be gone.  Which type of course do YOU want to survive?


Factors in Student Performance: Improving Research

Our data work, and our research, in collegiate mathematics education tends to be simple in design and ambiguous in results.  We often see good or great initial results with a project, only to see regression towards the mean over time (or worse).  I’d like to propose a more complete analysis of the problem space.

The typical data collection or research design involves measuring student characteristics … test scores, HS GPA, prior college work, grades in math classes, etc.  For classical laboratory research, this would be equivalent to measuring the subjects without measuring the treatment effects directly.

So, think about measurements for our ‘treatments’.  If we are looking into the effectiveness of math courses, the treatments are the net results of the course and the delivery of that course.  Since we often dis-aggregate the data by course, we at least ‘control’ for those effects.  However, we are not very sophisticated in measuring the delivery of the course — in spite of the fact that we have data available to provide some levels of measurement.

As an example, we offer many sections of pre-calculus at my college.  Over a period of 4 years, there might be 20 distinct faculty who teach this course.  A few of these faculty only teach one section in one semester; however, the more typical situation is that a faculty member routinely teaches the same course … and develops a relatively consistent delivery treatment.

We often presume (implicitly) that the course outcomes students experience are relatively stable across instructor treatment.  This presumption is easily disproved, and easily compensated for.
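To make the treatment measure concrete, here is a minimal sketch in Python.  It assumes a hypothetical roster table with one row per student enrollment and illustrative column names (instructor, passed); neither the data nor the column names come from the post.

```python
import pandas as pd

# Hypothetical roster: one row per student enrollment in a single course.
# 'passed' uses a binary coding (1 = pass with a 2.0/C or better, 0 = otherwise).
roster = pd.DataFrame({
    "instructor": ["A", "A", "A", "B", "B", "C", "C", "C", "C"],
    "passed":     [1,   0,   1,   1,   1,   0,   1,   0,   1],
})

# Treatment measure: each instructor's pass rate in this course.
instructor_rate = roster.groupby("instructor")["passed"].mean()

# Course mean, weighted by enrollment (every student row counts once).
course_mean = roster["passed"].mean()

print(instructor_rate)
print(f"Weighted course pass rate: {course_mean:.2f}")
```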

Here is a typical graph of instructor variation in treatment within one course:

[Figure: pass rates by individual instructor in one pre-calculus course, ranging from roughly 40% to 90%, with the weighted course mean of about 65% shown as a horizontal line]
We have pass rates ranging from about 40% to about 90%, with the course mean (weighted) represented by the horizontal line at about 65%.  As a statistician, I am not viewing either extreme as good or bad (they might both be ‘bad’ as a mathematician); however, I am viewing these pass rates as a measure of the instructor treatment in this course.  Ideally, we would have more than one treatment measure.  This one measure (instructor pass rate) is a good place to start for practitioner ‘research’.

In analyzing student results, the statistical issue is:

Does a group of students (identified by some characteristic) experience results which are significantly different from the treatment measure as estimated by the instructor pass rate?

The data set then includes a treatment measure, as well as the measurements about students.  In regression, we then include this ‘instructor pass rate’ as a variable.  When there is substantial variation in instructor treatment measures, that variable is often the strongest correlate with success.  If we attempt to measure student results without controlling for this treatment, we can report false positives or false negatives due to that confounding variable.

Another tool, then, is to compute the ‘gain’ for each student.  The typical binary coding (1 = pass with a 2.0/C; 0 = otherwise) is used, but we then subtract the instructor treatment measure from it.  Examples (a code sketch follows the list):

  • Student passes, instructor pass rate = .64 … gain = 1-.64 = .36
  • Student does not pass, instructor pass rate = .64 … gain = 0-.64 = -.64
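Here is a minimal sketch of both tools — including the instructor pass rate as a regression variable and computing each student’s ‘gain’ — assuming a hypothetical student-level table.  The column names (passed, act_math, instructor), the toy values, and the use of statsmodels are illustrative choices, not the post’s own analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical student-level data for one course (toy values).
df = pd.DataFrame({
    "passed":     [1, 0, 1, 1, 0, 1, 0, 1, 1, 0],
    "act_math":   [22, 24, 25, 19, 21, 26, 23, 21, 27, 25],
    "instructor": ["A", "A", "B", "B", "B", "C", "C", "C", "A", "B"],
})

# Treatment measure: instructor pass rate, attached to each student's row.
df["instr_rate"] = df.groupby("instructor")["passed"].transform("mean")

# 'Gain' for each student: binary outcome minus the instructor treatment measure.
df["gain"] = df["passed"] - df["instr_rate"]

# Regression on a student characteristic (ACT Math) while controlling for the
# treatment measure; a plain linear model is used here only as a sketch.
model = smf.ols("passed ~ act_math + instr_rate", data=df).fit()
print(model.params)
```

With real data, the size of the coefficient on the treatment measure is the point of the exercise: when it is substantial, an analysis that ignores it can easily report false positives or false negatives about the student characteristic.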

When we analyze something like placement test scores versus success, we can graph this gain by the test score:

[Figure: mean ‘gain’ by ACT Math score, with a confidence interval for each score group]
This ‘gain’ value for each score shows that there is no significant change in student results until the ACT Math score is 26 (well above the cutoff of 22).  This graph is from Minitab, which does not report the n values for each group; as you’d expect, the large confidence interval for a score of 28 is due to the small n (6 in this case).
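For readers who want to reproduce this kind of summary without Minitab, here is a sketch that computes the mean ‘gain’ and a rough confidence interval for each score group.  It assumes the hypothetical table from the sketch above and uses a simple normal approximation rather than any particular package’s method.

```python
import numpy as np
import pandas as pd

def gain_by_score(df: pd.DataFrame) -> pd.DataFrame:
    """Mean 'gain' per ACT Math score, with an approximate 95% confidence interval."""
    summary = (
        df.groupby("act_math")["gain"]
          .agg(mean_gain="mean", sd="std", n="count")
          .reset_index()
    )
    # Normal-approximation interval; groups with n = 1 will show NaN limits.
    half_width = 1.96 * summary["sd"] / np.sqrt(summary["n"])
    summary["ci_low"] = summary["mean_gain"] - half_width
    summary["ci_high"] = summary["mean_gain"] + half_width
    return summary

# Example use, with the hypothetical df from the earlier sketch:
# print(gain_by_score(df))
```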

That conclusion is hidden if we look only at the pass rate, instead of the ‘gain’.  This graph shows an apparent ‘decreased’ outcome for scores of 24 & 25 … which have an equal value in the ‘gain’ graph above:

[Figure: raw pass rate by ACT Math score, showing an apparent dip at scores of 24 and 25]
The main point of this post is not how our pre-calculus course is doing, or how good our faculty are.  The issue is ‘treatment measures’ separate from student measures.  One of the primary weaknesses of educational research is that we generally do not control for treatments when comparing subjects; that is a fundamental defect which needs to be corrected before we can have stable research results which can help practitioners.

This is one of the reasons why we should not trust the ‘results’ reported by change agents such as Complete College America, or Jobs For the Future, or even the Community College Research Center.  Not only do the treatment measures vary by instructor within one institution; I am pretty sure that they also vary across institutions and regions.  Unless we can show that there is no significant variation in treatment results, there is no way to trust ‘results’ which reach conclusions based only on student measures.


Intermediate Algebra … the Barrier Preventing Progress

The traditional math curriculum in colleges is significantly resistant to change and progress; I talked about some of the reasons for this condition in a recent post about the Common Core & the Common Vision related to the future of college mathematics (see https://www.devmathrevival.net/?m=201703).  We carry some historical baggage which creates additional forces resisting efforts to make progress in the curriculum at the college level.

Our “Intermediate Algebra” course occupies a position of power.  First … it has long served as the only accepted demarcation between “college level” courses and those which are not.  AMATYC recently approved a position statement to help clarify this demarcation (see http://www.amatyc.org/?page=PositionInterAlg).  Second … it has been used as the prerequisite to both college algebra and pre-calculus, which contradicts the origin of intermediate algebra as a copy of HS algebra II (which was never designed for this prerequisite role).

I’ve written previously about the need for Intermediate Algebra to be intentionally removed from the college curriculum; see https://www.devmathrevival.net/?p=2347

Intermediate Algebra must die … now!

Recently, we’ve had some email discussion in my state about the credential requirements for faculty … especially those teaching “intermediate algebra”.  Although we all want to provide students with quality faculty for every math course, we don’t agree on what this means.  Like most accrediting bodies, ours makes a distinction between developmental courses and general education courses; developmental courses require that faculty have a degree at least one level above what they teach … while general education courses require that faculty have 18 graduate credits in the field they are teaching.

Because of that credentialing difference, faculty teaching college mathematics courses tend to be functionally separate from those teaching developmental math courses (unless at a small institution).  A consequence of this faculty split is that the interface zone (intermediate algebra to college algebra in particular) is difficult to change in basic ways.  Faculty with a STEM focus are more concerned with their ‘upper level’ courses (calculus, linear algebra, etc), while those with a developmental focus are often more concerned with the beginning algebra level.

Intermediate algebra, just by its presence in our curriculum, is a barrier to making progress in modernizing our work.  If we were to remove Intermediate Algebra as a course, both levels of mathematics faculty would (by necessity) work together to create a more reasonable replacement.  If Intermediate Algebra had never existed, do you think we would create that same course now?  Obviously, no … we would do something much more reasonable.

Intermediate Algebra must die … now!

Efforts to ‘improve’ intermediate algebra typically involve micro-adjustments (different mix of skills).  Changes of this type have been tried over the past 30 years (or more) with almost no impact on any problem or outcome.  Our problems have become severe enough that no set of micro-changes will create a solution … we need macro-changes.

We need to remove the barrier — get rid of your intermediate algebra course (and mine!).  Replace it with a modern course like Algebraic Literacy (https://www.devmathrevival.net/?page_id=2312) if that makes sense to you.  Or, create a different solution for the problems.  Of course, part of the solution is to keep some students — those who are developmental but preparing for college algebra — out of any course at the intermediate algebra level.  Intermediate algebra is certainly not needed as preparation for statistics or quantitative reasoning at the college level.

Some of us are having a strong response to this proposal (of removing the intermediate algebra barrier).  If you live in a state that has a policy of ‘intermediate algebra for general education in college’, or your institution has such a policy, you are experiencing another reason why intermediate algebra is a barrier that must be removed.  Intermediate algebra is a copy (sometimes quite weak) of an old high school mathematics course in an era when the overwhelming majority of our students have experienced more advanced mathematics in their high school.  This was true before ‘the Common Core’, and is becoming more true as time goes on.

Intermediate Algebra must die … now!

We can create viable solutions, with modern courses about current mathematical needs, if we are just willing to toss this one course from our curriculum.  Intermediate Algebra must die, and die soon.  It is a barrier to the progress that we … and our students … urgently need.  Don’t wait for a replacement to be ‘ready’ — the solution will be ready when we are committed to making a change.

Which of these is your choice?

  • Eliminate intermediate algebra at your institution effective Fall 2018
  • Eliminate intermediate algebra at your institution effective Fall 2019
  • Eliminate intermediate algebra at your institution effective Fall 2020
  • Ignore the intermediate algebra problem, and hope it goes away by itself.

