Maybe it’s Not “Men of Color”: Equity in College Mathematics, data part II

A recent post here looked at a summary of pass rates based on “Pell eligibility” and race, where Pell eligibility is used as an indicator of possible poverty.  Take a look at https://www.devmathrevival.net/?p=2791 .  The basic message was that the outcomes for black students were significantly lower, and that part of this difference seems related to the impact of poverty.

Today, I wanted to follow that up with some similar data on the role of gender (technically, ‘sex’) in the outcomes of students, accounting for poverty and race.  This seems especially important given the national attention to “men of color” (http://cceal.org/about-cceal/).  As a social justice issue, I agree that this focus on MEN of color is important given the unequal incarceration rates.

However, this is what I see in our data for all Pell eligible students in math courses:

[Chart: pass rates in math courses for Pell-eligible students, by race and sex]

As with the prior chart, this reflects data over a 6-year period … which means that the ‘n’ values for each group are large (up to 10,000 for ‘white’).  Given those sample sizes, almost any difference in proportions is statistically significant.  All three comparisons ‘point’ in the same direction — females have higher outcomes than males, within each racial group.
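The point about sample size can be checked with a quick two-proportion z-test.  The pass rates below are purely hypothetical illustrations; only the n = 10,000 group size comes from this post:

```python
import math

# Two-proportion z-test with pooled standard error. The rates 0.52 and
# 0.50 are HYPOTHETICAL; the point is that at n = 10000 per group, even
# a 2-percentage-point gap clears the usual 1.96 significance threshold.
def two_prop_z(p1, n1, p2, n2):
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_prop_z(0.52, 10000, 0.50, 10000)    # roughly 2.8
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p, well under 0.05
```

This is why, with groups this large, statistical significance alone tells us little; the practical size of each gap is the more useful comparison.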

However, notice that the ‘women’ of color have lower outcomes than the men ‘without color’ (aka ‘white’).  A focus on men of color, within mathematics education, is not justified by this data.  Here is what I see …

  • There is a ‘race thing’ … unequal outcomes for blacks and hispanics, compared to white students.
    [This pattern survives any disaggregation by other factors, such as different courses and indicators of preparation.]
  • There is a ‘sex thing’ … unequal outcomes for men, compared to women.
    [This difference is smaller, and does NOT survive some disaggregations.]

There is a large difference in ‘effect size’ between these two: the black ‘gap’ in outcomes approaches 20 percentage points (the black pass rate is about 2/3 of the white pass rate), while the ‘male’ gap is 5 percentage points or less (the male pass rate is 90% to 96% of the female pass rate).  In other words, it does not help to be a woman of color; it just hurts less than being a man of color.
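To make the relative sizes concrete, here is a small sketch.  The pass rates are hypothetical, chosen only so that the gaps match the percentage-point figures quoted above:

```python
# HYPOTHETICAL pass rates; only the gap sizes (about 20 points for race,
# about 5 points for sex) come from the post.
white, black = 0.60, 0.40    # 20-percentage-point race gap
female, male = 0.55, 0.50    # 5-percentage-point sex gap

race_ratio = black / white   # about 0.67: black rate is roughly 2/3 of white
sex_ratio = male / female    # about 0.91: male rate is roughly 90% of female
```

Under these illustrative numbers, the race gap is roughly four times the size of the sex gap in both absolute and relative terms.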

I think that pattern fits the social context in the United States.  The trappings of discrimination have been fashioned into something that looks less disturbing, without addressing the underlying problems.  We have actually retreated in this work from where we were 40 to 50 years ago; there was a time when college financial aid was deliberately constructed as a tool in this work, and from the information I have seen, it was effective.  Current college policies, combined with a non-supportive financial aid system, result in equity gaps for PEOPLE of color.

Most of us have a small role in this work, but this does not mean the role is unimportant.  If your department and institution are critiquing your impact on people of color, terrific; I hope we have an opportunity to share ideas on solutions.  If your department or institution is not deeply involved in this work, why not?  We have both a professional and a moral responsibility to consider the differential impact of our work, including unintended consequences.

 Join Dev Math Revival on Facebook:

Equity in College Mathematics: What does the data tell us about poverty and race?

I am very proud of my department for our decision to do some serious work on equity.  We are having focused discussions at meetings and in hallways, we are bringing up equity in other discussions, and have examined quite a bit of data.  I want to highlight a little bit of that data.  This post will focus on the role of poverty in the pursuit of equity in college mathematics.

Like many colleges, my institution provides access to a centralized data reporting function (“Argos” in our case).  We can use this database to extract and summarize data related to our courses, and the database includes some student characteristics (such as race, ethnicity, and sex … self-reported).  In addition, the database connects to direct institutional records dealing with enrollment status and financial aid.  The primary piece of data from the financial aid record is a field called “Pell Eligible”.

As you know, Pell Grants are based on need; this usually means an annual income of less than $30,000.  Students are not required to apply, even if they would qualify for the maximum award.  However, we do know that students do not receive a Pell ‘award’ unless they have a low income.  For us, this “Pell Eligibility” is the closest thing we have to a poverty indicator.

When we summarize student grades by race and Pell Eligibility (across ALL courses in our department), this is the result.

[Chart: pass rates by race and Pell eligibility, across all department courses]

This graph has two ‘takeaways’ for me.  First, poverty is likely associated with lower rates of passing.  Second, the impact of race on outcomes is even stronger.  Note that the ‘Pell’ group is lower than the non-Pell group for every race, and that the ‘Black non-Pell’ group has lower outcomes than the non-Pell hispanics or whites.

The situation is actually worse than this chart suggests.  The distribution of ‘poverty’ (as estimated by Pell eligibility) is definitely unequal: 70% of the black group is Pell eligible, while only 40% of the white group is Pell eligible (with hispanics at a middle rate).
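One way to see why the unequal distribution matters: even if the Pell and non-Pell pass rates were identical for every race, the group-level pass rates would still diverge.  The pass rates in this sketch are hypothetical; only the Pell-eligibility shares (70% and 40%) come from the post:

```python
# Overall pass rate as a mixture of Pell and non-Pell pass rates,
# weighted by each group's Pell-eligibility share. The pass rates
# (0.45 Pell, 0.60 non-Pell) are HYPOTHETICAL and identical for both
# groups; only the 70% / 40% Pell shares come from the post.
def overall_rate(pell_share, pell_rate, non_pell_rate):
    return pell_share * pell_rate + (1 - pell_share) * non_pell_rate

black_overall = overall_rate(0.70, 0.45, 0.60)   # 0.495
white_overall = overall_rate(0.40, 0.45, 0.60)   # 0.540
```

The unequal Pell shares alone open a gap of several percentage points, on top of the within-Pell-status gaps the chart shows.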

I am seeing a strong connection between our goal of promoting equity and the goals of social justice.  As long as significant portions of our population live in poverty, we will not achieve equity in the mathematics classroom … awarding ‘financial aid’ does not cancel out the impacts of poverty.  In addition, as long as some groups in our population are served by under-resourced and struggling schools, we will not achieve equity in the mathematics classroom.  This latter statement refers to the fact that many states have policies like Michigan’s which allow those with resources to have a choice about ‘better schools’, while limiting state funding for public schools (and simultaneously attacking the teaching profession).

In our region, the majority of the black students attending my college came from the urban school district.  This urban school district had a proud history through the 1980s, with outcomes equal to any suburban school in the area.  However, dramatic changes have occurred … even though that district has made significant progress in recent years, there is no doubt that the urban schools are not preparing students for college.  Poverty plays a role within that school district, and the interaction between race and poverty is again unequal: more blacks live in poverty within the city than other races.

The social justice movement seeks to provide all groups with equal access to upward mobility, combined with a reasonably high probability of escaping poverty, based on a presumption of effort.  Barriers to progress are addressed as systemically as possible.  College mathematics is currently one of the barriers to progress in social justice.  Modern curricula do not solve this barrier, given the data I’ve seen (though we are early in that process of change).

If we see our role as separate from equity and social justice, we are enabling the inequities to continue.  This is a set of issues about which we cannot remain silent.  Even if we are not committed to social justice, we need to work on these barriers for the good of our profession.  You might begin by discussing social justice issues with your friends or colleagues who teach sociology or anthropology, quite a few of whom have a background in ‘social problems’.


Normalizing a Bad Curriculum: Forty Five Years of Dev Math, Part IV

This is another entry in a series of posts looking back at developmental mathematics history.  Previous posts dealt with origins, a golden age, and a missed opportunity … and now we look at the last half of the 1980s.

It might be difficult to believe that there was a time before people talked about standards.  The first great effort on standards came from NCTM in 1989 (“Curriculum and Evaluation Standards”, summarized at http://www.mathcurriculumcenter.org/PDFS/CCM/summaries/standards_summary.pdf ), which was a follow-up to “An Agenda for Action” (http://www.nctm.org/Standards-and-Positions/More-NCTM-Standards/An-Agenda-for-Action-(1980s)/ ).  Whether these standards were even discussed at a college was more a coincidence of faculty connections than any organizational cooperation.

The period we are talking about preceded these initial standards.  However, collaborative activity across institutions and regions was increasing in the late 1980s.  It is not a coincidence that my first AMATYC conference was in 1987 (“Going to Kansas City” theme song).  We, as a profession, were looking for stability and support.  The AMATYC Developmental Mathematics Committee (DMC) had several active subcommittees on issues such as “Student Learning Problems” and “Minimal Competencies”, as well as “Handheld Calculators”.  I served as the editor of the DMC Newsletter for several years, a newsletter produced by printing stuff on a dot-matrix printer and physically cutting & pasting to make the pages.  Ah, for the good old days …

We entered this period having missed the great opportunity, which naturally led to the primary outcome of the time:

The existing pre-college and college curriculum was normalized and accepted as a “good thing”, or at least “the way it should be”.

Some of us knew that NCTM was working on their standards, though none of us were involved in any way (no community college faculty served on a team or as a writer).  In this period prior to the first AMATYC work on standards, we explicitly supported the grade-course structure (from K-12) which had been our inheritance. When a problem was identified (such as low pass rates), our response was to double-down … we created split courses for beginning algebra, and split courses for intermediate algebra; we often added a basic math course separate from a pre-algebra course.  This double-down trend resulted in horrific sequences for students.  We often went from our old sequence of 3 courses to a system where some students took 9 terms or semesters of developmental mathematics.  [These structures still exist, relatively intact, in some places … parts of California, for example.]

Another aspect of the ‘double-down’ response was an attempt to identify THE list of critical learning outcomes.  The DMC “MinComps” (minimal competency) subcommittee worked by snail mail and annual meetings to identify the arithmetic skills that all students should possess.   Although MinComps never achieved their goal of writing a position statement on this content, the group did have an impact on our courses and the textbooks used in those courses.

Never was our response to ask “What are the mathematical abilities which students need for college success and life success?”  Instead, the response was “What outcomes should be in this course?”  There was a trend, especially during this time period, for our textbooks to converge to a common list of content topics and outcomes (very skill based).  Workbooks were very popular in this period, often consisting of ‘name topic, state property, show example, give practice’.  In some ways, the ‘programmed learning’ textbooks of a decade earlier were more supportive of student learning.

The content became the thing.  When students did not succeed, we looked to identify a student learning problem.  In some cases, we even tried to provide support to ‘overcome’ a student learning problem.  Our efforts were directed at improving course pass rates … at the expense, frequently, of the sequence pass rate.  Our friend ‘exponential attrition’ is very powerful … a sequence of 5 courses will always be worse than a sequence of 3 courses, unless we can realize close to a 50% improvement in the course pass rate.  Going from a 45% pass rate in all 3 courses to a 67% pass rate in all 5 courses is not likely; if it has ever happened, you can be pretty sure that the improvement was temporary.
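The attrition arithmetic can be checked directly.  This sketch assumes equal, independent per-course pass rates, which is an illustration rather than a model of any actual sequence:

```python
# "Exponential attrition": completing a sequence requires passing every
# course, so under equal, independent per-course pass rates the sequence
# completion rate is pass_rate ** courses.
def sequence_completion(pass_rate, courses):
    return pass_rate ** courses

three_at_45 = sequence_completion(0.45, 3)   # about 0.091
five_at_67 = sequence_completion(0.67, 5)    # about 0.135

# Per-course rate a 5-course sequence needs just to MATCH 3 courses at 45%:
break_even = three_at_45 ** (1 / 5)          # about 0.62
```

Under these assumptions, a 5-course sequence needs a per-course pass rate around 62% just to break even with 3 courses at 45%, and the 67% figure (a nearly 50% relative improvement over 45%) is the kind of jump that is rarely achieved or sustained.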

Since we were relatively ignorant of the sequence and attrition issues, we were pleased with longer sequences which reinforced the defective content we had inherited and then had normalized.

My younger colleagues will have a difficult time understanding the technological context for this work.  When we had the “hand held calculator” subcommittee in the DMC, we were not talking about graphing calculators — the work was focused on basic calculators, with a recognition that scientific calculators were available.  Our offices had computers (very slow) with no networking; I had a dial-up modem to connect to a nearby mainframe, but that was quite unusual.  We often hand-wrote our tests (and it’s a miracle that any student could pass such tests!).  Later in this period, we made the initial efforts to provide students with access to computers — often done in a separate computer classroom, not related to any math course.  Homework, like our tests, was a hand-written affair.

This technology did not cause any change in the content or delivery of instruction.  If anything, the status of the technology was part of the set of forces which led to the normalization of the defective content in college mathematics.  Our motto seemed to be “We don’t know if this stuff is really worth much, but at least we generally agree that it is what we should be doing because we are all doing roughly the same thing.”  Many math faculty today continue to look at curriculum primarily from this lens.

The trend in this period to normalize the defective content contributed to our response in the next period (the early 1990s), when the NCTM standards suggested that such content was, indeed, defective.  We had set up conditions which made us essentially immune to the valid critiques.  That is where the next post will pick up our history.

 

Culture of Evidence … Does it Exist? Could it Exist??

Perhaps you are like me … when the same phrase is used so extensively, I develop an allergic-type reaction to the phrase.  “Awesome” is such a phrase, though my fellow educators do not use that phrase nearly as much as our students.  However, we use ‘culture of evidence’, and I have surely developed a reaction to it.

Part of my reaction goes back to a prior phrase … “change the culture”, used quite a few years ago to describe the desire to alter other people’s beliefs as well as their behavior.  Education is based on a search for truth, which necessarily implies individual responsibility for such choices.  Since I don’t work for BuzzFeed nor Complete College America, my priority is on education in this classic sense.

The phrase “culture of evidence” continues to be used in education, directed at colleges in particular.  One part of this is a good thing, of course … encouraging the use of data to analyze problems.  However, that is not what the phrase means.  It’s not like people say “apply the scientific method to education”; I can get behind that, though we need to remember that a significant portion of our work will remain more artistic and intuitive than scientific.  [Take a look at https://www.innovativeeducators.org/products/assessing-summer-bridge-developing-a-culture-of-evidence-to-support-student-success for example.]

No, this ‘culture of evidence’ is not a support for the scientific method.  Instead, there are two primary components to the idea:

  • Accountability
  • Justification by data

Every job and profession comes with a need for accountability; that’s fine, though accountability is the minor emphasis of ‘culture of evidence’.

The primary idea is the justification by data; take a look at the student affairs professional viewpoint (https://www.naspa.org/publications/books/building-a-culture-of-evidence-in-student-affairs-a-guide-for-leaders-and-p  ) and the Achieving The Dream perspective (http://achievingthedream.org/focus-areas/culture-of-evidence-inquiry  ).

All of this writing about “culture of evidence” suggests that the goal is to use statistical methodologies in support of the institutional mission.  That gives the phrase a scientific sound, but does it make any sense at all?

First of all, the classic definition of culture (as used in the phrase) speaks to shared patterns:

Culture: the set of shared attitudes, values, goals, and practices that characterizes an institution or organization  (Merriam-Webster online dictionary)

In an educational institution, how many members of the organization will be engaged with the ‘evidence’ as justification, and how are they involved?  The predominant role is one of data collection … providing organizational data points that somebody else will use to justify what the organization wants to justify.  How can we say ‘culture of evidence’ when the shared practice is recording data?  For most people, it’s just part of their job responsibilities … nothing more.

Secondly, what is this ‘evidence’?  There is an implication that measurements are possible for all aspects of the institutional mission.  You’ve seen this — respected institutions are judged as ‘failures’ because the available measurements are negative.  I’m reminded of an old quote … the difference between the importance of measurements and measuring the important.

There is also the problem of talking about ‘evidence’ without the use of statistical thinking or designs.  As statisticians, we know that ‘statistics’ is used to better understand problems and questions … but the outcome of statistics is frequently that we have more questions to consider.

No, I think this “culture of evidence” phrase describes both an impossible condition and an undesirable goal.  We can’t measure everything, and we can’t all be statisticians.  Nor should we want judgments about the quality of an institution to be reduced to summative measures of a limited set of variables covering a limited range of ‘outputs’ in education.

The ‘culture of evidence’ phrase and its derivatives (‘evidentiary basis’, for example) are used to suggest a scientific practice without any commitment to the scientific method.  As normally practiced, ‘culture of evidence’ often conflicts with the scientific method (supporting pre-determined answers or solutions) and has little to do with institutional culture.

Well, this is what happens when I have an allergic reaction to the written word … I have a need to write about it!

