Are Those Tests Evil? (ACT, SAT)

So, I have been doing some work with my college’s data on passing a math class, correlated with either a placement test score or a score on the ACT Math section.  I shared some correlation data in an earlier post … this post is more about the validity of the measures.

One issue that has been impacting 4-year colleges & universities is the ‘test optional’ movement, where institutions make admissions decisions based on factors other than standardized tests.  This is an area of some research; one example is at http://www.act.org/content/dam/act/unsecured/documents/MS487_More-Information-More-Informed-Decisions_Web.pdf if you are interested.  Since I work at a community college, all of our admissions decisions are ‘test optional’.

Michigan uses standardized tests (ACT or SAT) as part of the required testing for students who complete high school, and the vast majority of our students do complete high school in Michigan.  Curiously, fewer than half of our students have standardized test scores on their college record.  This creates both some interesting statistical questions and some practical problems.

For the past several years, the ACT has been that test for Michigan high school students (a switch was made to the SAT this year).  We use the ACT standard for ‘college readiness’, which is a 22 on the ACT Math section.  That standard was determined by ACT researchers, using a criterion of “75% probability of passing college algebra” based on a very large sample of data.

A problem with this standard is that “college algebra” has several meanings in higher education.  For some people, college algebra is synonymous with a pre-calculus course; for others, college algebra is a separate course from pre-calculus.

My institution actually offers both flavors of college algebra; we have a “Pre-Calculus I” course as well as a “College Algebra” course.  The College Algebra course does not lead to the standard calculus courses, but does prepare students for both applied calculus and a statistics course.  The Pre-Calculus I course is a very standard first-semester course, and has a lower pass rate than College Algebra.  The prerequisite to both courses is one of (a) ACT Math 22, (b) Accuplacer College Level Math (CLM) test 55, or (c) passing our intermediate algebra course; all three of these provide the student with a “Math Level 6” indicator.  We assign a higher math level for scores significantly above the thresholds listed here.
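For readers who like to see placement logic spelled out, here is a minimal sketch of that “either/or” rule.  The function name and return values are my own shorthand, not our actual student system.

    # Minimal sketch of the "alternate measures" rule described above:
    # any one of the three criteria earns the Math Level 6 indicator.
    # (Names and return values are my shorthand, not the college's system.)
    def math_level(act_math=None, clm=None, passed_intermediate_algebra=False):
        if ((act_math is not None and act_math >= 22)
                or (clm is not None and clm >= 55)
                or passed_intermediate_algebra):
            return 6      # eligible for Pre-Calculus I or College Algebra
        return None       # no Level 6 indicator from these measures

    print(math_level(act_math=23))   # 6
    print(math_level(clm=40))        # None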

So, here is what we see in one year’s data for the Pre-Calculus course:

  • ACT Math 22 to 25: 63% pass Pre-Calculus I
  • CLM 55 to 79: 81% pass Pre-Calculus I
  • Passed Intermediate Algebra: 71% pass Pre-Calculus I

The first two proportions are significantly different, and the first proportion is significantly different from the ‘75%’ threshold used by ACT.  One conclusion is that the ACT College Readiness standard is based more on other “college algebra” courses than on pre-calculus.
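If you want to check that kind of claim on your own campus data, a two-proportion z-test (plus a one-sample test against ACT’s 75% benchmark) is enough.  A minimal sketch follows; the group sizes n_act and n_clm are hypothetical placeholders, since I am not reporting our exact counts here.

    # Sketch: two-proportion z-test for the ACT vs CLM pass rates, plus a
    # one-sample test against the 75% "college readiness" benchmark.
    # The group sizes n_act and n_clm are hypothetical placeholders.
    from math import sqrt
    from scipy.stats import norm

    n_act, p_act = 150, 0.63    # ACT Math 22-25 group (hypothetical n)
    n_clm, p_clm = 100, 0.81    # CLM 55-79 group (hypothetical n)

    # Two-proportion z-test with a pooled standard error
    x_act, x_clm = round(n_act * p_act), round(n_clm * p_clm)
    p_pool = (x_act + x_clm) / (n_act + n_clm)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_act + 1 / n_clm))
    z = (p_act - p_clm) / se
    print("ACT vs CLM:   z = %.2f, p = %.4f" % (z, 2 * norm.sf(abs(z))))

    # One-sample test: ACT group's pass rate against the 75% benchmark
    se0 = sqrt(0.75 * 0.25 / n_act)
    z0 = (p_act - 0.75) / se0
    print("ACT vs 0.75:  z = %.2f, p = %.4f" % (z0, 2 * norm.sf(abs(z0))))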

One of the things we find is that there is little correlation between the ACT Math score and passing Pre-Calculus.  In other words, students with a 25 ACT Math are not any more likely to pass than those with a 22.  This is not quite as true with the CLM; the probability of passing increases (though slightly) as scores rise above the cutoff.
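The statistic behind that statement is a point-biserial correlation between the placement score and the pass/fail outcome.  Here is a small sketch; the arrays are made-up values standing in for our records.

    # Sketch: point-biserial correlation between placement score and passing.
    # The arrays below are illustrative values only, not our student records.
    import numpy as np
    from scipy.stats import pointbiserialr

    act_scores = np.array([22, 22, 23, 23, 24, 24, 25, 25, 22, 25])
    passed     = np.array([ 1,  0,  1,  1,  0,  1,  1,  0,  1,  0])  # 1 = passed Pre-Calculus I

    r, p = pointbiserialr(passed, act_scores)
    print("point-biserial r = %.2f, p = %.3f" % (r, p))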

Now, a question is “Why did so many students NOT provide the College with their ACT scores?”  Well, perhaps the better question is “Are those who did not provide the scores significantly different from those who did provide them?”  That is a workable question, though the data is not easy to come by.  The concern is that some types of students are more likely to provide their ACT scores (for example, white students or students from more affluent schools).

We’ve got reason to have doubts about using the ACT Math score as part of a placement cutoff, and reason to prefer the CLM for predictive power.

More of us need to analyze this type of data and share the results; very little practitioner research is available on the validity issues of standardized tests used for placement.

 
Don’t Tell, Active Learning, and Other Mythologies about Learning Mathematics

For one of the projects I’m involved with, I was giving feedback on a section that presents concepts and suggestions for the use of active learning in college mathematics classrooms.  One of the goals of this project is to connect practice (teaching) with research on learning … a very worthy goal.

This particular section included “quotes from Vygotsky” (Lev Vygotsky); see https://en.wikipedia.org/wiki/Lev_Vygotsky for some info on his life and work.  I put the reference in quotes (‘quotes from …’) because none of the quotes were from Vygotsky himself.  Vygotsky wrote in Russian, of course, and few of us can read the Russian involved; most “quotes” credited to Vygotsky are actually from the Cole-edited volume “Mind in Society” (https://books.google.pn/books/about/Mind_in_Society.html?id=RxjjUefze_oC).  That book was “edited” by scholars who had a particular educational philosophy in mind, and who used Vygotsky as a source (both translated and paraphrased).

I talked about that history because Vygotsky was an influential early researcher … in human development.  As far as I know, the overwhelming portion of his research dealt with fairly young children (2 to 6 years).  That original research has since been cited in support of a constructivist philosophy of education, which places individual discovery at the center of learning.

Most of the research in learning mathematics is based on macro-treatment packages.  Such research does not show whether any particular feature of a treatment results in better learning … it looks at treatments that combine several (or dozens of) treatment variables.  Some “educologists” use this macro-treatment research to support very particular aspects of those treatments (like inquiry-based learning [IBL]).

The “don’t tell” phrase in the title of this post comes from the original NCTM standards, which told us not to tell (ironic?) based on some macro-treatment research.  I’ve never seen any research at the micro-level showing that “telling” is a bad thing to do.  Some of us, however, have concluded that the best way to teach any college math course (developmental or college level) is with discovery learning in context with an avoidance of ‘telling’.

I want to highlight some micro-level results from research, but first an observation … in addition to the problems listed above about macro-treatment research, the Vygotsky research dealt with children learning material for which they had little prior learning.  In our math classes, the majority of students have had some prior exposure to the concepts up through pre-calculus; when these students are placed into an IBL situation, the first thing that happens is that the process activates their prior knowledge (both good and bad).  This existence of prior knowledge complicates our design of the learning process.

So, here are some observations I offer based on decades of reading research as close to the micro-treatment level as possible.

  • Lecturing (un-interrupted talking by the instructor) can be effective as a component of learning.
  • Small group processes can be effective as a component of learning.
  • The effectiveness of either of those treatments depends upon the expertise and understanding of learning on the part of the teacher.
  • Teachers need to deliberately seek to develop expertise and understanding about learning the mathematics in their courses.
  • Students assume that their prior knowledge is sound and applies to everything.
  • The amount (frequency) of formative assessment should be directly proportional to the amount of inaccurate prior knowledge in the students.
  • Feedback on student learning should not be instantaneous but timely, and qualitative feedback is just as important as information on accuracy.
  • The primary determinant of learning is student effort in dealing with the material at the understanding levels (as opposed to memorizing).
  • Repetition practice (blocked) is okay, though mixed practice (unblocked) is more effective.
  • Classrooms are complicated social structures, and the teacher does not have influence over significant portions of those structures.

Those are the “Rotman Ten”, presented without their references to research.  Many of them grew out of a sabbatical I took a few years ago, and much of the rest is extracted from multiple sources.  A few (like blocked and unblocked practice) have an extremely sound historical basis in micro-treatment research.  None of them suggest that the adoption of a particular teaching method will result in general improvements.

Hopefully, you see some wisdom in that “Ten List”, and perhaps some food for thought.

 

Multiple Measures: How Consistent Are ACT Math and Accuplacer?

Like many institutions, mine allows students to place into a math course via a variety of methods.  The most common methods are the ACT Math score and the Accuplacer College Level Math (CLM) test.  I ran into a reference to a university which concluded that the ACT Math score was not a reliable predictor.

So, I’m posting a quick summary of how those two instruments agree (or not).  As part of our normal program improvement and curricular work, I have gathered information on about 800 students who were enrolled in our pre-calculus course.  Obviously, this is not a random sample of all ACT Math and all CLM scores.  However, given the selection, the two instruments should have a reasonable amount of consistency.

There were 122 students with both ACT Math and CLM scores.  Of these:

  • 74 had scores on both that produced the same course placement (61%)
  • 48 had scores that produced different course placements (39%)

The vast majority of the ‘disagreement’ involved a higher ACT Math placement than CLM placement.  A quick comparison shows that students who place based on ACT Math have a lower pass rate than those who place based on the CLM.  I’ve got some more work to do in analyzing the data before identifying a hypothesis about that pattern.

For that sample of 122 students with both scores, there is a significant correlation (about 0.32).  That correlation is somewhat limited by the sample, which tends to emphasize relatively high scores (a skewed, range-restricted distribution).  Even with that limitation, I was concerned about the small size of the correlation … I’d expect a ‘native’ correlation (all data) of about 0.7, and a reduction to 0.5 would be reasonable given the restricted sample.  That 0.32 is pretty small for these two measures.
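For those curious about how much of that drop could be restriction of range alone, the standard direct range-restriction formula gives a quick estimate.  In the sketch below, the assumed full-range correlation of 0.70 and the variance ratios are assumptions for illustration, not measured values.

    # Sketch: attenuation of a correlation under range restriction
    # (the inverse of the Thorndike Case II correction). The assumed
    # full-range r = 0.70 and the ratios u are illustrative, not measured.
    from math import sqrt

    def restricted_r(r_full, u):
        """Correlation after restricting one variable; u = sd_restricted / sd_full."""
        return r_full * u / sqrt(1 - r_full**2 + (r_full * u)**2)

    r_full = 0.70
    for u in (0.8, 0.6, 0.4):
        print("u = %.1f  ->  restricted r = %.2f" % (u, restricted_r(r_full, u)))

Even a fairly heavy restriction (u around 0.6) only brings 0.70 down to about 0.5, which is why the observed 0.32 still bothers me.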

Most of us use “alternate measures” (this method OR that method); low consistency between methods means our error rates will increase with the ‘or’.  If the low consistency holds up in further analysis, we should either use the most reliable predictor … or move to true multiple measures, where some combination of data determines a course placement.
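As a thought experiment, a “true multiple measures” rule might look something like the sketch below, where both scores (when available) are combined into one composite instead of taking either one alone.  The spreads and cutoff are placeholders, not a proposal for our actual placement policy.

    # Hypothetical sketch of a combined ("true multiple measures") placement rule.
    # The spreads used to standardize each score are placeholders; a real rule
    # would estimate them from local data and validate the composite cutoff.
    def composite_placement(act_math=None, clm=None):
        parts = []
        if act_math is not None:
            parts.append((act_math - 22) / 3.0)    # distance from ACT Math cutoff of 22
        if clm is not None:
            parts.append((clm - 55) / 15.0)        # distance from CLM cutoff of 55
        if not parts:
            return "no placement data"
        composite = sum(parts) / len(parts)        # average the available measures
        return "Math Level 6" if composite >= 0 else "below Level 6"

    print(composite_placement(act_math=23, clm=45))   # conflicting signals -> one decision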

I began looking at our data because I could not find studies looking at the correlation and relative placement strength of our two measures.  If you are aware of a study that provides that type of information, I’d appreciate hearing about it.


TBR and the Co-Requisite Fraud

Since many policy makers and academic leaders are telling us that we need to do (or consider) co-requisite remediation because of the results from the Tennessee Board of Regents (TBR), the TBR should release valid results … results which are consistent with direct observations by people within their system.  #TBR #Co-Requisite

Earlier this year, one of the TBR colleges shared their internal data for the past academic year, during a visit to my college.  This particular institution is not unusual in their academic setting, which is quite diverse.  Here is a summary of their data.

  • Foundations (intermediate algebra)   College: 61%
  • Math for Liberal Arts                College: 52%
  • Statistics (Intro)                   College: 40%

The TBR lists 51.7% as the completion rate for the same time period.  [See https://www.insidehighered.com/sites/default/server_files/files/TBR%20CoRequisite%20Study%20-%20Update%20Spring%202016%20(1).pdf]

Recently, I was able to have a short conversation with a mathematics faculty member within the TBR system.  The college administrator who visited earlier this year had said that their mathematics faculty “would never go back” now that they have tried co-requisite remediation, suggesting that most faculty are now supporters.  The faculty member I talked with, however, had some very strong language about the validity of the TBR data; the phrase “cooked the books” was used.  This internal voice certainly does not sound like a strong supporter, and it suggests that there has been deliberate tampering with the data.

There are two direct indicators of fraud in the TBR data.

  1. Intermediate Algebra (Foundations) was counted in the data, even though it does not meet any degree requirement in the system.  [It is “college level”, but does not count for an AA or AS degree.]  Foundations had the highest pass rate for the visiting college; however, the TBR does not release course-by-course results.
  2. “Passing” is a 1.0 or higher, even though the norm for general education is a 2.0 or higher.  Again, the TBR does not release actual grade distributions.  The rate of D/1.0-1.5 grades can vary, but is often 10% or higher.

The data is presented as passing (implying a 2.0 or better) a college math course (implying a non-developmental course); the TBR data violates both of these conditions.  If the data were financial instead of academic, this would be called fraud … as when a corporation reports a large profit instead of the reality of a very small one.

Perhaps the TBR did not intentionally commit this fraud.  However, given that the leaders involved are experienced academics, that does not seem likely.  The errors I am seeing are too fundamental.

Of course, it is possible that both views from internal sources are incorrect.  I do not think that is as likely as the TBR data being incorrect.

My estimate of the ACTUAL completion rate of college math courses (liberal arts math and statistics) with a 2.0/C or higher:

30% to 40% completion of college mathematics in co-requisite remediation … NOT 50% to 60% as claimed by the TBR.
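To show where an estimate like that can come from, here is the back-of-envelope arithmetic.  Only the 51.7% reported rate and the roughly 10% rate of D/1.0-1.5 grades come from the discussion above; the Foundations share is a hypothetical illustration.

    # Back-of-envelope sketch of the adjustment argued for above.
    reported = 0.517             # TBR's reported "completion" rate
    foundations_share = 0.08     # hypothetical share of reported passes that were Foundations only
    d_grades = 0.10              # sub-2.0 grades counted as passing (often 10% or higher)

    adjusted = reported - foundations_share - d_grades
    print("college-level completion at 2.0 or higher: about %.0f%%" % (adjusted * 100))   # ~34%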

Whether or not I am correct in claiming fraudulent data reporting by the TBR, I am sure that the TBR needs to provide much better transparency in its reporting.  Developmental education is being attacked and fundamentally altered by policy makers and external influencers whose most common rationale consists of the statement “Co-requisite remediation has to be a good thing … that has been proven by the Tennessee data!”

Some readers may suggest that my wording of this post is overly-dramatic and not in keeping with the norms of academic discourse.  I think the dramatic tone is quite warranted considering the manner in which the TBR data has been used by Complete College America and others.  I agree that this post is not within the norms of academic discourse, but I believe that the tone is totally within the norms of the new reality of higher education:

Instead of discourse that builds upon prior results over time, we have allowed external influencers to determine the agenda for higher education.

If policy makers and leaders seek to push us in the direction they prefer and then use selected data to support this direction, then those policy makers and leaders can expect us to call them out for fraud and other inappropriate behavior.

It is time for the Tennessee Board of Regents to report their data in a way that allows the rest of us to examine the questions of ‘what is working’ in ‘which course’ under ‘what conditions’.

Enough of the fraud; it’s time to show us the truth about what, which, and conditions.
