Factors in Student Performance: Improving Research

Our data work, and our research, in collegiate mathematics education tend to be simple in design and ambiguous in results.  We often see good or great initial results with a project, only to see regression toward the mean over time (or worse).  I’d like to propose a more complete analysis of the problem space.

The typical data collection or research design involves measuring student characteristics … test scores, HS GPA, prior college work, grades in math classes, etc.  For classical laboratory research, this would be equivalent to measuring the subjects without measuring the treatment effects directly.

So, think about measurements for our ‘treatments’.  If we are looking into the effectiveness of math courses, the treatments are the net results of the course and the delivery of that course.  Since we often disaggregate the data by course, we at least ‘control’ for those effects.  However, we are not very sophisticated in measuring the delivery of the course — in spite of the fact that we have data available to provide some levels of measurement.

As an example, we offer many sections of pre-calculus at my college.  Over a period of 4 years, there might be 20 distinct faculty who teach this course.  A few of these faculty only teach one section in one semester; however, the more typical situation is that a faculty member routinely teaches the same course … and develops a relatively consistent delivery treatment.

We often presume (implicitly) that the course outcomes students experience are relatively stable across instructor treatment.  This presumption is easily disproved, and easily compensated for.

Here is a typical graph of instructor variation in treatment within one course:

[Graph: pass rates by individual instructor for one course, with the weighted course mean shown as a horizontal line]

We have pass rates ranging from about 40% to about 90%, with the course mean (weighted) represented by the horizontal line at about 65%.  As a statistician, I am not viewing either extreme as good or bad (they might both be ‘bad’ as a mathematician); however, I am viewing these pass rates as a measure of the instructor treatment in this course.  Ideally, we would have more than one treatment measure.  This one measure (instructor pass rate) is a good place to start for practitioner ‘research’. In analyzing student results, the statistical issue is:

Does a group  of students (identified by some characteristic) experience results which are significantly different from the treatment measure as estimated by the instructor pass rate?

The data set then includes a treatment measure, as well as the measurements about students.  In regression, we then include this ‘instructor pass rate’ as a variable.  When there is substantial variation in instructor treatment measures, that variable is often the strongest correlate with success.  If we attempt to measure student results without controlling for this treatment, we can report false positives or false negatives due to that confounding variable.  Another tool, then, is to compute the ‘gain’ for each student: use the typical binary coding (1 = pass with 2.0/C; 0 = else), then subtract the instructor treatment measure from that value.  Examples:

  • Student passes, instructor pass rate = .64 … gain = 1-.64 = .36
  • Student does not pass, instructor pass rate = .64 … gain = 0-.64 = -.64
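The ‘gain’ computation can be sketched in a few lines of code.  This is a minimal illustration rather than an actual analysis pipeline; the function and variable names are hypothetical, with the instructor pass rate of .64 taken from the examples above.

```python
# Sketch of the 'gain' computation: binary success coding minus the
# instructor treatment measure.  Names here are illustrative, not from
# any real data set.

def gain(passed: bool, instructor_pass_rate: float) -> float:
    """(1 if the student passed with a 2.0/C, else 0) minus the
    instructor's overall pass rate in the course."""
    return (1.0 if passed else 0.0) - instructor_pass_rate

# The two examples from the text (instructor pass rate = .64):
print(round(gain(True, 0.64), 2))   # 0.36
print(round(gain(False, 0.64), 2))  # -0.64
```

In a regression data set, either this gain or the raw pass indicator together with the instructor pass rate as a separate variable serves to control for the instructor treatment.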

When we analyze something like placement test scores versus success, we can graph this gain by the test score:

[Graph: mean ‘gain’ by ACT Math score, with a confidence interval for each score group]

This ‘gain’ value for each score shows that there is no significant change in student results until the ACT Math score reaches 26 (well above the cutoff of 22).   This graph is from Minitab, which does not report the n values for each group; as you’d expect, the large confidence interval for a score of 28 is due to the small n (6 in this case).
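A rough version of this interval plot can be computed without Minitab.  The sketch below groups (score, gain) records by ACT Math score and reports a normal-approximation 95% interval for the mean gain in each group; the records are synthetic, invented for illustration, and a t-based interval (which Minitab would use) is more appropriate for the small group sizes that produce the wide intervals noted above.

```python
from math import sqrt
from statistics import mean, stdev
from collections import defaultdict

# (ACT Math score, gain) pairs -- synthetic illustration only
records = [(22, 0.36), (22, -0.64), (22, 0.36), (22, 0.36),
           (28, 0.36), (28, 0.36), (28, -0.64), (28, 0.36),
           (28, 0.36), (28, 0.36)]

groups = defaultdict(list)
for score, g in records:
    groups[score].append(g)

for score in sorted(groups):
    gains = groups[score]
    n = len(gains)
    m = mean(gains)
    # 1.96 is the normal-approximation z value; rough for small n
    half = 1.96 * stdev(gains) / sqrt(n)
    print(f"score {score}: n={n}, mean gain={m:.2f} +/- {half:.2f}")
```

The half-width shrinks with sqrt(n), which is exactly why the score-28 group (small n) shows such a wide interval in the graph above.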

That conclusion is hidden if we look only at the pass rate, instead of the ‘gain’.  This graph shows an apparent ‘decreased’ outcome for scores of 24 & 25 … which have an equal value in the ‘gain’ graph above:

[Graph: pass rate by ACT Math score]

The main point of this post is not how our pre-calculus course is doing, or how good our faculty are.  The issue is ‘treatment measures’ separate from student measures.  One of the primary weaknesses of educational research is that we generally do not control for treatments when comparing subjects; that is a fundamental defect which needs to be corrected before we can have stable research results which can help practitioners.

This is one of the reasons why we should not trust the ‘results’ reported by change agents such as Complete College America, or Jobs For the Future, or even the Community College Research Center.  Not only do the treatment measures vary by instructor at one institution, I am pretty sure that they vary across institutions and regions.  Unless we can show that there is no significant variation in treatment results, there is no way to trust ‘results’ which reach conclusions just based on student measures.

 Join Dev Math Revival on Facebook:

Intermediate Algebra … the Barrier Preventing Progress

The traditional math curriculum in colleges is significantly resistant to change and progress; I talked about some of the reasons for this condition in a recent post about the Common Core & the Common Vision related to the future of college mathematics (see https://www.devmathrevival.net/?m=201703 ).  We carry some historical baggage which creates additional forces resisting efforts to make progress in the curriculum at the college level.

Our “Intermediate Algebra” course occupies a position of power.  First … it has long served as the only accepted demarcation between “college level” courses and those which are not.  AMATYC recently approved a position statement to help clarify this demarcation (see http://www.amatyc.org/?page=PositionInterAlg ).  Second … it has been used as the prerequisite to both college algebra and pre-calculus, which contradicts the origin of intermediate algebra as a copy of HS algebra II (a course which was never designed for this prerequisite role).

I’ve written previously about the need for Intermediate Algebra to be intentionally removed from the college curriculum; see https://www.devmathrevival.net/?p=2347

Intermediate Algebra must die … now!

Recently, we’ve had some email discussion in my state about the credential requirements for faculty … especially those teaching “intermediate algebra”.  Although we all want to provide students with quality faculty for every math course, we don’t agree on what this means.  Like most accrediting bodies, ours makes a distinction between developmental courses and general education courses; developmental courses require that faculty have a degree at least one level above what they teach … while general education courses require that faculty have 18 graduate credits in the field they are teaching.

Because of that credentialing difference, faculty teaching college mathematics courses tend to be functionally separate from those teaching developmental math courses (unless at a small institution).  A consequence of this faculty split is that the interface zone (intermediate algebra to college algebra in particular) is difficult to change in basic ways.  Faculty with a STEM focus are more concerned with their ‘upper level’ courses (calculus, linear algebra, etc), while those with a developmental focus are often more concerned with the beginning algebra level.

Intermediate algebra, just by its presence in our curriculum, is a barrier to making progress in modernizing our work.  If we were to remove Intermediate Algebra as a course, both levels of mathematics faculty would (by necessity) work together to create a more reasonable replacement.  If Intermediate Algebra had never existed, do you think we would create that same course now?  Obviously, no … we would do something much more reasonable.

Intermediate Algebra must die … now!

Efforts to ‘improve’ intermediate algebra typically involve micro-adjustments (different mix of skills).  Changes of this type have been tried over the past 30 years (or more) with almost no impact on any problem or outcome.  Our problems have become severe enough that no set of micro-changes will create a solution … we need macro-changes.

We need to remove the barrier — get rid of your intermediate algebra course (and mine!).  Replace it with a modern course like Algebraic Literacy (https://www.devmathrevival.net/?page_id=2312) if that makes sense to you.  Or, create a different solution for the problems.  Of course, part of the solution is to keep some of the students out of any course at the intermediate algebra level — developmental but preparing for college algebra.  Intermediate algebra is certainly not needed as preparation for statistics or quantitative reasoning at the college level.

Some of us are having a strong response to this proposal (of removing the intermediate algebra barrier).  If you live in a state that has a policy of ‘intermediate algebra for general education in college’, or your institution has such a policy, you are experiencing another reason why intermediate algebra is a barrier that must be removed.  Intermediate algebra is a copy (sometimes quite weak) of an old high school mathematics course in an era when the overwhelming majority of our students have experienced more advanced mathematics in their high school.  This was true before ‘the Common Core’, and is becoming more true as time goes on.

Intermediate Algebra must die … now!

We can create viable solutions, with modern courses about current mathematical needs, if we are just willing to toss this one course from our curriculum.  Intermediate Algebra must die, and die soon.  It is a barrier to the progress that we … and our students … urgently need.  Don’t wait for a replacement to be ‘ready’ — the solution will be ready when we are committed to making a change.

Which of these is your choice?

  • Eliminate intermediate algebra at your institution effective Fall 2018
  • Eliminate intermediate algebra at your institution effective Fall 2019
  • Eliminate intermediate algebra at your institution effective Fall 2020
  • Ignore the intermediate algebra problem, and hope it goes away by itself.


Common Core, Common Vision, and Math in the First Two Years

I’ve been thinking about these ideas anyway.  However, a recent comment on a blog post here got me ready to make a post about predicting the future of mathematics in the first two years.  I’d like to be optimistic … past experiences would cause considerable pessimism.   The truth likely lies between.

One of the “45 years of dev math” posts resulted in this comment from Eric:

If Back2Basics is what drifted up to CC Dev Math programs back then, what do you see the impact of CommonCore being on CC Dev Math now?

This post was about the early 1980s, when we had an opportunity to go beyond the grade level approach of the existing dev math courses (one course per grade, replicating content).  Instead of progress, we retrenched … resulting in courses which were subsets of outdated K-12 courses.  Much of the current criticism of dev math is based on these obsolete dev math courses.

We again have an opportunity to advance our curriculum.  This time, the opportunity exists for all mathematics in the first two years.

  • The K-12 math world is changing in response to the Common Core State Standards.  Even if politics takes away the assessments for that content, many states and districts have already implemented a curriculum in response to the Common Core.  (see http://www.corestandards.org/Math/)
  • The college math world is responding to the Common Vision (see http://www.maa.org/sites/default/files/pdf/CommonVisionFinal.pdf) which is beginning the process of articulating a set of standards for curriculum and instruction in the first two years.  AMATYC is developing a document providing guidance to faculty & colleges on implementing these standards.  [I’m on the writing team for the AMATYC document.]

The two sets of forces share quite a bit in terms of the nature of the standards.  For example, both K-12 and college standards call for significant increases in numeric methods (statistics and modeling) along with a more advanced framework for what it means to ‘learn mathematics’.

These consistent parallels in the two sets of forces would suggest that the future of college mathematics is bright, that we are on the verge of a new age of outstanding mathematics taught by skilled faculty resulting in the majority of students achieving their dreams.  This is the optimistic prediction mentioned at the start.

On the other hand, we have some prior experiences with basic change.  One example is the ‘lean and lively calculus’ movement (conference and publications in 1986 & 1989).  It is very sad that we had to modify ‘calculus’ with something suggesting ‘good’ (lean & lively) … the very nature of calculus deals with coping with change and determining solutions for problems over time.  As you know, this movement had very little long-term impact on the field (outside of some boutique programs) while the “Thomas Calculus” continues to be taught much like it has been for the past 50 years.

Here are some factors in why we find it so difficult to change college mathematics (the levels beyond developmental mathematics).

  1. Professional isolation:  membership in professional organizations is low among faculty teaching in the first two years.  The vast majority of us lead isolated professional lives with limited opportunities to interact with the professional standards.
  2. Adjunct faculty as worker bees: especially in community colleges, adjunct faculty teach a large portion of our classes … but are separated from the curriculum change processes.  The existing curriculum tends to be limited by artificial asymptotes created by our perceptions and by the institution’s desire to save money.
  3. Autonomy and pride:  especially full-time faculty tend to place too high an emphasis on autonomy & academic freedom, with the false belief that there is something inherently ‘good’ about opposing all efforts to change the courses the person teaches.  Although most prevalent at universities, this ‘pride’ malady is also a serious infection at community colleges.

I’ve certainly missed some other factors.  These three represent strong, difficult-to-control forces within the complex system of higher education.  Thus, I consider the pessimistic view that ‘nothing will change, really’.

I think there is a force strong enough to overcome these forces restraining progress in our field.  You’d like to know the nature of this strong force?

The attraction of teaching ‘good mathematics’ is fundamental to the makeup of mathematicians teaching in college.  If faculty can see a clear path to having more ‘good mathematics’, nothing will stop them from following this path.

If the Common Core, the Common Vision, and the AMATYC new standards can connect with this desire to teach ‘good mathematics’, we will achieve something closer to the optimistic prediction.  The New Life Project has experienced some of this type of inspiration of faculty.  Perhaps AMATYC will create a new project to bring that inspiration to a larger group of faculty teaching in the first two years.

One thing we know for certain about the future:  the future will look very much like the present and the past unless a group of people work together to create something better.  I would like to think that our profession is ready for this challenge.

Are you ready to become engaged with the process of creating a better future for college mathematics?

 

The Big Missed Opportunity: Forty Five Years of Dev Math, Part III

This is part of a series of posts reflecting on our history in developmental mathematics … especially at community colleges in the USA.  We’ve talked about the ‘origins’, about a ‘golden age’ (or not), and now we move to the first half of the 1980s.

Two major movements were active at about the same time in the early 1980s … one dealt with placement policies, and the other dealt with the content of mathematics courses at this level.  When more than one movement is impacting a profession at the same time, there is always an opportunity for fundamental change.  That is not what happened in this case, and we continue to deal with the ‘incorrect’ responses to that opportunity.

The use of standardized assessments for placement was widespread (though with varied instruments) at the start of this period, as we moved from home-grown placement measures to assessments used at a larger scale (state, region, or nation).  Those tracking data quickly noticed that these measures, often used with mandatory placement, were impacting certain groups at a disproportionate rate.  In some cases, the items on the assessments had been tested for bias; even with tests using only these tested items, the results showed an uncomfortable level of differential impact.

Clearly, “something” had to be done.  A professional response might have been to develop an effective short term intervention that would equalize the results.  Another professional response might have been to establish collaborations between community college math faculty and the local K-12 school’s math program.  In general, neither of those responses occurred.  Instead, there was a decline in the rate of mandatory placement:

Students have the right to fail.  If they disagree with the placement measures, they can take the higher course.

I still hear this “right to fail” statement, which I see as an abrogation of our responsibilities: we let students make a decision known to put them at unnecessary risk (we knew they were likely to fail).  Most colleges did not continue this ‘worst practice’ (as opposed to best practice), with the result that the placement system continued to have a differential impact on known groups of students.  That problem continues to the present day, as a general condition.  [Some colleges, systems, and states either use placement systems that moderate the impact (true multiple measures) OR have implemented new curricula which make the results more tolerable (pathways).]

For some history of placement policies, see https://ccrc.tc.columbia.edu/media/k2/attachments/college-placement-strategies-evolving-considerations-practices.pdf  .

The content movement impacting developmental mathematics in the early 1980s was a ‘trickle-up’ reaction to K-12 math reform in the prior decade or two.  The K-12 math reform is usually called “new math”, which failed because the curriculum was designed by university math education professors with little attention to the teachers who would try to deliver it.  Even though we can see the “DNA” from this New Math within the modern curricular standards of NCTM, AMATYC, and MAA, there was a backlash in K-12 that drifted up to college … “BACK TO BASICS”.

There were very few college level books that implemented New Math designs; most were (and still are) very similar to the K-12 math that predated New Math.  However, here was an opportunity for college math faculty to create developmental mathematics courses with balanced and effective approaches to multiple levels of learning — including reasoning and communication.  Our collective response was to regress even further on the levels we sought to deliver in our curriculum.  We reduced the amount of reading in our books, added examples, grouped the student practice by type, and generally made choices guaranteed to limit the student benefit for their efforts.

The two movements (right to fail, back to basics) involved forces that could have had that synergy necessary for significant long-term change.  We should have had one response to resolve both issues … change our curriculum in a basic way so that entering memory levels of particular skills do not determine success; rather, the entering level of understanding would determine success.

In my view,  the “New Life Project” represents this type of approach with developmental courses that are far less sensitive to remembered skills (Math Literacy, Algebraic Literacy), which means that they are far more accessible to all parts of our student population.  The fact that this solution appeared and gained support 30 years after the first opportunity indicates to me that our profession has been resistant to progress.  It’s not that dev math did not change between 1985 and 2010; it’s that all of the other changes did not address the core problems we face.  We needed other external forces acting upon our work before we were willing to try something different enough to possibly make real progress towards helping all students succeed.

We currently are in the ‘next big opportunity’ to make progress.  Let’s be sure to do things this time that will get us significantly closer to our goals.

 
