Category: Research connected to practice

Are Those Tests Evil? (ACT, SAT)

So, I have been doing some work with my college’s data, relating success in a math class to either a placement test score or a score on the ACT Math section.  I shared some correlation data in an earlier post … this post is more about the validity of the measures.

One issue that has been impacting 4-year colleges & universities is the ‘test optional’ movement, where institutions make admissions decisions based on factors other than standardized tests.  This is an area of some research; one example is at http://www.act.org/content/dam/act/unsecured/documents/MS487_More-Information-More-Informed-Decisions_Web.pdf if you are interested.  Since I work at a community college, all of our admissions decisions are ‘test optional’.

Michigan uses standardized tests (ACT or SAT) as part of the required testing for students who complete high school, and the vast majority of our students do complete high school in Michigan.  Curiously, fewer than half of our students have standardized test scores on their college record.  This creates both some interesting statistical questions and some practical problems.

For the past several years, the ACT has been that test for Michigan high school students (a switch was made to the SAT this year).  We use the ACT standard for ‘college readiness’, which is a 22 on the ACT Math section.  That standard was determined by ACT researchers, using a criterion of “75% probability of passing college algebra” based on a very large sample of data.

A problem with this standard is that “college algebra” has several meanings in higher education.  For some people, college algebra is synonymous with a pre-calculus course; for others, college algebra is a separate course from pre-calculus.

My institution actually offers both flavors of college algebra; we have a “Pre-Calculus I” course as well as a “College Algebra” course.  The College Algebra course does not lead to the standard calculus sequence, but does prepare students for both applied calculus and a statistics course.  The Pre-Calculus I course is a very standard first-semester course, and has a lower pass rate than College Algebra.  The prerequisite to both courses is one of (a) ACT Math 22, (b) Accuplacer College Level Math (CLM) test 55, or (c) passing our intermediate algebra course; all three of these provide the student with a “Math Level 6” indicator.  We assign a higher math level for scores significantly above the thresholds listed here.
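Just to make that placement rule concrete, here is a minimal sketch of the “Math Level 6” check; the function name is invented for illustration, and the higher math levels (for scores well above these thresholds) are left out since I have not listed those cutoffs here.

```python
# Minimal sketch of the Math Level 6 rule described above; the function name is
# illustrative, and higher-level cutoffs are omitted (not specified in the post).
def math_level_6(act_math=None, clm=None, passed_intermediate_algebra=False):
    """Return True if any of the three Level 6 prerequisites is met."""
    if act_math is not None and act_math >= 22:    # (a) ACT Math 22 or higher
        return True
    if clm is not None and clm >= 55:              # (b) Accuplacer CLM 55 or higher
        return True
    return passed_intermediate_algebra             # (c) passed intermediate algebra

# Example: a student with ACT Math 21 and CLM 58 still qualifies via the CLM.
print(math_level_6(act_math=21, clm=58))   # True
```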

So, here is what we see in one year’s data for the Pre-Calculus course:

  • ACT Math 22 to 25:  63% pass Pre-Calculus I
  • CLM 55 to 79:  81% pass Pre-Calculus I
  • Passed Intermediate Algebra:  71% pass Pre-Calculus I

The first two proportions are significantly different from each other, and the first proportion is significantly different from the ‘75%’ threshold used by ACT.  One conclusion is that the ACT College Readiness standard is based more on other “college algebra” courses than on pre-calculus.
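For readers who want to check those significance claims, here is a minimal sketch of the two comparisons.  The group sizes below are hypothetical placeholders (the post does not report the n for each group); with the real counts you would simply substitute them in.

```python
# Minimal sketch of the two significance checks; the sample sizes are
# hypothetical placeholders, not the college's actual group sizes.
from math import sqrt
from scipy.stats import norm

act_pass, act_n = 63, 100    # ACT Math 22-25 group: 63% passed (hypothetical n)
clm_pass, clm_n = 81, 100    # CLM 55-79 group: 81% passed (hypothetical n)

# Two-proportion z-test (pooled): ACT group vs CLM group
p1, p2 = act_pass / act_n, clm_pass / clm_n
p_pool = (act_pass + clm_pass) / (act_n + clm_n)
se = sqrt(p_pool * (1 - p_pool) * (1 / act_n + 1 / clm_n))
z = (p1 - p2) / se
print(f"ACT vs CLM: z = {z:.2f}, p = {2 * norm.sf(abs(z)):.4f}")

# One-proportion z-test: ACT group vs the 75% benchmark used by ACT
se0 = sqrt(0.75 * 0.25 / act_n)
z0 = (p1 - 0.75) / se0
print(f"ACT group vs 75% benchmark: z = {z0:.2f}, p = {2 * norm.sf(abs(z0)):.4f}")
```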

One of the things we find is that there is little correlation between the ACT Math score and passing Pre-Calculus.  In other words, students with a 25 ACT Math are not any more likely to pass than those with a 22.  This is not quite as true with the CLM; the probability of passing increases slightly as scores rise above the cutoff.
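That correlation claim is easy to check with student-level records.  Here is a rough sketch using a point-biserial correlation, with a handful of made-up (score, passed) records standing in for the real data.

```python
# Rough sketch of the correlation check; the records below are made up for
# illustration and stand in for the college's actual student-level data.
from scipy.stats import pointbiserialr

scores = [22, 22, 22, 23, 23, 24, 24, 24, 25, 25]   # ACT Math scores (placeholder)
passed = [1,  0,  1,  1,  1,  0,  1,  0,  0,  1]    # 1 = passed Pre-Calculus I

r, p_value = pointbiserialr(passed, scores)
print(f"point-biserial r = {r:.2f} (p = {p_value:.3f})")
# With the real records, an r near zero within the 22-25 band is what "little
# correlation" looks like; the same check on CLM scores should show a small positive r.
```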

Now, a question is “why did so many students NOT provide the College with their ACT scores”?  Well, perhaps the better question … “Are those who did not provide the scores significantly different from those who did provide them?”  That is a workable question, though the data is not easy to come by.  The concern is that some types of students are more likely to provide the ACT scores (either white students or students from more affluent schools).

We’ve got reason to have doubts about using the ACT Math score as part of a placement cutoff, and reason to prefer the CLM for predictive power.

More of us need to analyze this type of data and share the results; very little practitioner research is available on the validity of standardized tests.

 

 

 

 

Don’t Tell, Active Learning, and Other Mythologies about Learning Mathematics

For one of the projects I’m involved with, I was providing feedback on a section presenting concepts and suggestions for the use of active learning in college mathematics classrooms.  One of the goals of this project is to connect practice (teaching) with research on learning … a very worthy goal.

This particular section included “quotes from Vygotsky” (Lev Vygotsky); see https://en.wikipedia.org/wiki/Lev_Vygotsky for some info on his life and work.  I put the reference in quotes (‘quotes from …’) because none of the quotes were from Vygotsky himself.  Vygotsky wrote in Russian, of course, and few of us can read the Russian involved; most “quotes” credited to Vygotsky are actually from Cole’s book “Mind in Society” (https://books.google.pn/books/about/Mind_in_Society.html?id=RxjjUefze_oC).  That book was “edited” by scholars who had a particular educational philosophy in mind, and who used Vygotsky as a source (both translated and paraphrased).

I talked about that history because Vygotsky was an influential early researcher … in human development.  As far as I know, the overwhelming portion of his research dealt with fairly young children (2 to 6 years).  That original research has since been cited in support of a constructivist philosophy of education, which places individual discovery at the center of learning.

Most of the research on learning mathematics is based on macro-treatment packages.  The research does not show whether any particular feature of a treatment results in better learning … it looks at treatments that combine several (or dozens) of treatment variables.  Some “educologists” use this macro-treatment research to support very particular aspects of those treatments (like inquiry-based learning [IBL]).

The “don’t tell” phrase in the title of this post comes from the original NCTM standards, which told us not to tell (ironic?) based on some macro-treatment research.  I’ve never seen any research at the micro-level showing that “telling” is a bad thing to do.  Some of us, however, have concluded that the best way to teach any college math course (developmental or college level) is with discovery learning in context with an avoidance of ‘telling’.

I want to highlight some micro-level results from research, but first an observation … in addition to the problems listed above about macro-treatment research, the Vygotsky research dealt with children learning material for which they had little prior learning.  In our math classes, the majority of students have had some prior exposure to the concepts up through pre-calculus; when these students are placed into an IBL situation, the first thing that happens is that the process activates their prior knowledge (both good and bad).  This existence of prior knowledge complicates our design of the learning process.

So, here are some observations I offer based on decades of reading research as close to the micro-treatment level as possible.

  • Lecturing (un-interrupted talking by the instructor) can be effective as a component of learning.
  • Small group processes can be effective as a component of learning.
  • The effectiveness of either of those treatments depends upon the expertise and understanding of learning on the part of the teacher.
  • Teachers need to deliberately seek to develop expertise and understanding about learning the mathematics in their courses.
  • Students assume that their prior knowledge is sound and applies to everything.
  • The  amount (frequency) of formative assessment should be directly proportional to the amount of inaccurate prior knowledge in the students.
  • Feedback on student learning should not be instantaneous but timely, and qualitative feedback is just as important as information on accuracy.
  • The primary determinant of learning is student effort in dealing with the material at the understanding levels (as opposed to memorizing).
  • Repetition practice (blocked) is okay, though mixed practice (unblocked) is more effective.
  • Classrooms are complicated social structures, and the teacher does not have influence over significant portions of those structures.

Those are the “Rotman Ten”, presented without their references to research.  Many of them grew out of a sabbatical I took a few years ago, and much of this is extracted from multiple sources.  A few (like blocked versus unblocked practice) have an extremely sound historical basis in micro-treatment research.  None of them suggest that the adoption of a particular teaching method will result in general improvements.

Hopefully, you see some wisdom in that “Ten List”, and perhaps some food for thought.

 

TBR and the Co-Requisite Fraud

Since many policy makers and academic leaders are telling us that we need to do (or consider) co-requisite remediation because of the results from the Tennessee Board of Regents (TBR), the TBR should release valid results … results which are consistent with direct observations by people within their system.  #TBR #Co-Requisite

Earlier this year, one of the TBR colleges shared their internal data for the past academic year during a visit to my college.  This institution is not unusual in its academic setting, and it serves a quite diverse student population.  Here is a summary of their data.

  • Foundations (intermediate algebra):  61% pass
  • Math for Liberal Arts:  52% pass
  • Statistics (Intro):  40% pass

The TBR lists 51.7% as the completion rate for the same time period.  [See https://www.insidehighered.com/sites/default/server_files/files/TBR%20CoRequisite%20Study%20-%20Update%20Spring%202016%20(1).pdf]

Recently, I was able to have a short conversation with a mathematics faculty member within the TBR system.  The college administrator who visited earlier this year said that their mathematics faculty “would never go back” now that they have tried co-requisite remediation, suggesting that most faculty are now supporters.  The faculty member I talked with had some very strong language about the validity of the TBR data; the phrase “cooked the books” was used.  This internal voice certainly does not sound like a strong supporter, and suggests that there is deliberate tampering with the data.

There are two direct indicators of fraud in the TBR data.

  1. Intermediate Algebra (Foundations) was included in the data, even though it does not meet any degree requirement in the system.  [It is “college level”, but does not count for an AA or AS degree.]  Foundations had the highest pass rate for the visiting college; however, the TBR does not release course-by-course results.
  2. “Passing” is a 1.0 or higher, even though the norm for general education is a 2.0 or higher.  Again, the TBR does not release actual grade distributions.  The rate of D/1.0-1.5 grades varies, but is often 10% or higher.

The data is presented as passing (implied 2.0) a college math course (implied not developmental); the TBR violates both of these conditions.  If the data were financial instead of academic, this would be called fraud … as in a corporation which manages to report a large profit instead of the reality of a very small profit.

Perhaps the TBR did not intentionally commit this fraud.  However, given that the leaders involved are experienced academics, that does not seem likely.  The errors I am seeing are too fundamental.

Of course, it is possible that both views from internal sources are incorrect.  I do not think that is as likely as the TBR data being incorrect.

My estimate of the ACTUAL completion rate of college math courses (liberal arts math and statistics) with a 2.0/C or higher:

30% to 40% completion of college mathematics in co-requisite remediation … NOT 50% to 60% as claimed by the TBR.
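Here is a back-of-the-envelope sketch of where that estimate comes from, using the course rates shared by the visiting college.  The equal enrollment split and the 10% to 15% share of D/1.0-1.5 grades are my assumptions, not figures released by the TBR.

```python
# Back-of-the-envelope estimate; the enrollment split and D-grade share are
# assumptions (the TBR does not release course-by-course or grade-level data).
liberal_arts_rate = 0.52   # visiting college: Math for Liberal Arts pass rate
statistics_rate = 0.40     # visiting college: Statistics (Intro) pass rate

# Drop Foundations (it meets no degree requirement); assume roughly equal
# enrollment in the two remaining college-level courses.
college_math_rate = (liberal_arts_rate + statistics_rate) / 2    # about 0.46

# Count only 2.0/C or better, assuming 10% to 15% of enrollees earn D/1.0-1.5 grades.
for d_share in (0.10, 0.15):
    print(f"D-share {d_share:.0%}: completion about {college_math_rate - d_share:.0%}")
# Prints roughly 31% to 36% ... inside the 30%-40% range, not 50% to 60%.
```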

Whether or not I am correct in claiming fraudulent data reporting from the TBR, I am sure that the TBR needs to provide much better transparency in its reporting. Developmental education is being attacked and fundamentally altered by policy makers and external influencers whose most common rationale consists of the statement “Co-requisite remediation has to be a good thing … that has been proven by the Tennessee data!”.

Some readers may suggest that my wording of this post is overly-dramatic and not in keeping with the norms of academic discourse.  I think the dramatic tone is quite warranted considering the manner in which the TBR data has been used by Complete College America and others.  I agree that this post is not within the norms of academic discourse, but I believe that the tone is totally within the norms of the new reality of higher education:

Instead of discourse, over time, building upon prior results, we have allowed external influencers to determine the agenda for higher education.

If policy makers and leaders seek to push us in the direction they prefer and then use selected data to support this direction, then those policy makers and leaders can expect us to call them out for fraud and other inappropriate behavior.

It is time for the Tennessee Board of Regents to report their data in a way that allows the rest of us to examine the questions of ‘what is working’ in ‘which course’ under ‘what conditions’.

Enough of the fraud; it’s time to show us the truth about what, which, and conditions.


 

Understanding the Data on Co-Requisite Remediation

We need to change how we handle remediation at the college level, because the traditional system is based on weak premises … and the most common implementations are designed to fail for most students.  Where we have had three or even four remedial courses, we need to look at one for most students.

Because of that baseline, the fanatical supporters of “co-requisite remediation” are having a very easy time selling their concepts to policy makers and institutional leaders.  The Complete College America (CCA) website has an interactive report on this (http://completecollege.org/spanningthedivide/#remediation-as-a-corequisite-not-a-prerequisite) where you can see “22%” as the national norm for the rate of students starting in remediation who complete a college math course.  With that is a list of four states that have implemented co-requisite models … all of which show 61% to 64%.

One obvious problem with the communication at the CCA site is that the original data sources are well hidden.   Where does the ‘22%’ value come from?  Is this all remediation relative to all college math?  The co-requisite structures almost always focus on non-algebraic math courses (statistics, quantitative reasoning).  One could argue that this issue is relatively trivial in the discussion; more on this later.

What is non-trivial is the source of the “61% to 64%”.

One of the community colleges from a co-requisite remediation state came to our campus and shared their detailed data … which makes it possible to explore what the data actually means.  Here are their actual success rates in the co-requisite model they are using:

Math for Liberal Arts: 52%

Statistics: 41%

These are the pass rates in the college math course for students taking the college course and the remediation course in the same semester.  Another point in this data is that ‘success’ is considered to be a D or better.

For comparison, here are similar results from a college using prerequisite remediation, showing the rate of completing the college math course for those placing at the beginning algebra level.

Quantitative Reasoning: 53%

Statistics:  52%

In other words, if 100 students placed at the beginning algebra level in the fall … there were about 52 who passed their college math course in the spring.  Furthermore, this college considers ‘success’ to be a 2.0 or better.  The prerequisite model here has higher standards and equal (or higher) results.

The problem with the data on co-requisite remediation is that only high-level summaries (aggregations) are shared.  Maybe the state average really is “61%” even though the visiting college sits at about 45% overall (they enroll more students in Statistics than in Liberal Arts).  Or, perhaps the data is being summarized for all students in the college course, without separating out those in the co-requisite course.  One hopes that the supporters are being honest and ethical in their communication.
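For instance, under a hypothetical enrollment split of 60% Statistics and 40% Liberal Arts, the blended rate for the visiting college would be 0.60(41%) + 0.40(52%) ≈ 45% … well below the reported state average, even before asking how “success” was defined.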

I suspect that the skewing of the data comes more from the “22%”.  The source for this number usually includes all levels of remediation, followed through to any college math course (including pre-calculus).  The co-requisite data is a different measurement, because the college course is limited (statistics, quantitative reasoning).

Another interesting thing about the data that was shared from the co-requisite remediation college is this statement:

Only about 20 students out of 1500 in co-requisite remediation had an ACT Math score at 15 or below.

At my institution, about 20% of our students have ACT Math scores at 15 or below.  Nationally, the ACT Math score of 15 is at the 15th percentile.  Why does one institution have about 1% in this range?  Is co-requisite remediation being used to create selective admission community colleges?  [Not by policy, obviously … but due to coincidental impacts of the co-requisite system.]
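A quick arithmetic check shows just how far outside the expected range that count falls.  The sketch below assumes the 1500 students were drawn from a population with either the national share (about 15%) or my college’s share (about 20%) of ACT Math scores at 15 or below; the binomial model is an assumption for illustration.

```python
# Quick check of "20 out of 1500"; the base rates come from the percentages
# mentioned above, and the simple binomial model is an illustrative assumption.
from math import sqrt

n_students, observed = 1500, 20
for label, p in [("national (~15%)", 0.15), ("my college (~20%)", 0.20)]:
    expected = n_students * p
    sd = sqrt(n_students * p * (1 - p))   # binomial standard deviation
    z = (observed - expected) / sd
    print(f"{label}: expected about {expected:.0f}, observed {observed} (z about {z:.0f})")
# Expected counts are roughly 225 to 300 students; an observed count of 20
# (about 1.3%) is many standard deviations below either expectation.
```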

Sometimes I hear the phrase “a more nuanced understanding” relative to current issues in mathematics education.  I suppose that would be nice.  First, though, we need to start with a shared basic understanding.  We cannot have that basic understanding as long as the data being thrown at us consists of ill-defined aggregate results lacking basic statistical validity.

Perhaps the co-requisite remediation data has statistical validity.  I tend to doubt that, as we use a peer-review process to judge statistical validity … and we know that has not been the case for the co-requisite remediation data we are generally seeing (especially from the CCA).  The quality of their data is so bad that a student doing that quality of work would earn a failing grade in most introductory statistics courses.  It’s discouraging to see policy leaders and administrators become profoundly swayed by statistics of such low quality.

Reducing ‘remediation’ to one measure is an obviously bad idea.

