Curriculum and Instructional Improvement

We are doing some things at my college that might be of interest to others — not the type of thing we would do a presentation on, though the information might be helpful.

Like many community colleges, our developmental mathematics courses have some of the highest enrollments on campus.  Therefore, these courses have a large number of sections and are taught by a wide variety of faculty — full time/part time, new/experienced, rigid/flexible, etc.  Like many colleges, we follow student progression through the courses as closely as we can.  In the traditional courses (pre-algebra, beginning algebra, intermediate algebra), the primary goal of each course has been to prepare students for the next course.  This progression data is not as good as we would like; nothing new there!

So, here is one thing we are doing about the problem.  We wrote a survey to be taken by instructors in a subsequent course.  In this survey, we listed the course outcomes for the prior course.  The survey asked the instructor to rank how important each outcome is in preparing students for success in the subsequent course.  [The survey itself is being delivered through “LimeSurvey”, a nice platform for surveys.]

The first survey asked intermediate algebra instructors about what students needed from beginning algebra.  We are currently working on the results (we had 16 surveys returned, from a pool of 33).  We are looking at the survey results as part of a process involving much discussion, rather than saying “this topic has got to be deleted because nobody needed it …”.  Our content in these courses is a little unusual in that few topics are covered in both courses — systems of equations is only in beginning algebra here, for example, as are most graphing concepts, like slope.  Factoring polynomials is one of the few overlapping topics, which is likely why those outcomes were highly rated by instructors for intermediate algebra.
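
For readers who want to try something similar, here is a minimal sketch of how returned ratings could be tallied.  It assumes a hypothetical CSV export from the survey tool (a file named beginning_algebra_outcomes.csv, with one row per respondent and one column per outcome, each cell holding a numeric importance rating); none of these names come from our actual survey.

```python
# A minimal sketch for summarizing outcome-importance ratings from a survey.
# Assumes a hypothetical CSV export ("beginning_algebra_outcomes.csv") with one
# row per respondent and one column per course outcome, where each cell holds a
# numeric importance rating (e.g., 1 = not important ... 4 = essential).
import csv
from statistics import mean

with open("beginning_algebra_outcomes.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Average the ratings for each outcome across all returned surveys,
# skipping blank cells.
summary = {
    outcome: mean(float(row[outcome]) for row in rows if row[outcome])
    for outcome in rows[0]
}

# List outcomes from most to least important as rated by instructors —
# a starting point for departmental discussion, not automatic deletion.
for outcome, avg in sorted(summary.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{avg:4.2f}  {outcome}")
```

A sorted list like this is only a conversation starter; as noted above, we treat the results as input to a faculty discussion rather than a cut list.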

Another area we are looking at is instructional quality.  We have had a common departmental final exam for these courses for many years; we all use the same exams and grade them with a common rubric.  However, much remains for each instructor to determine — points for attendance?  points for homework?  drop one low test?  making up tests?  We are working on providing instructors with feedback about how their choices impact students’ probability of success in the next course.  One thing we are starting to use is a simple data-reporting form that each instructor completes for each course:

Student  | Pre-final average | Final exam percent | Final course average
Abbott   |                   |                    |
Costello |                   |                    |
Brooks   |                   |                    |
Cabrera  |                   |                    |

The goal here is not to identify individual student issues; we are looking for patterns.  Does a given instructor have a large difference between the pre-final average and the final exam score?  Does an instructor have a large number of students who fail the final exam but pass the course?  [The final exam is required, but passing it is not required.]
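
To make the pattern-spotting concrete, here is a rough sketch of those two checks, assuming each instructor’s report were exported as a CSV (section_report.csv) with hypothetical column names and a 70% passing cutoff; the actual form and cutoffs at our college may differ.

```python
# A rough sketch of the two pattern checks described above, assuming a
# hypothetical CSV ("section_report.csv") with the columns shown in the table:
# pre_final_average, final_exam_percent, and final_course_average, all on a
# 0-100 scale, with 70 used here as an assumed passing cutoff.
import csv

PASSING = 70.0

with open("section_report.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Check 1: how far course grades run ahead of (or behind) final exam scores.
gaps = [
    float(r["pre_final_average"]) - float(r["final_exam_percent"]) for r in rows
]

# Check 2: how many students failed the required final but still passed the course.
failed_final_passed_course = sum(
    1
    for r in rows
    if float(r["final_exam_percent"]) < PASSING
    and float(r["final_course_average"]) >= PASSING
)

print(f"Average (pre-final minus final exam) gap: {sum(gaps) / len(gaps):.1f} points")
print(f"Failed the final but passed the course: {failed_final_passed_course} students")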

We’ve also begun using a “lesson study” method.  In our modified process, a group of instructors decides on a small topic to focus on, such as integer exponents.  The group talks about the topic — which is usually part of one class day: what makes this difficult?  what do students miss?  what shows understanding in students?  The group then creates a plan for the lesson, and some faculty use this in class while other faculty observe; this happens in 2 to 4 classes.  After these observations, the group meets to debrief … it’s about the lesson, not about students or instructors directly: what went well?  did students understand?  do some parts of the lesson need improvement?  Ideally, the class lessons are videotaped for use in this debriefing, though we have not done that yet.  The debriefing itself is very educational, and we would like to record it so other instructors can experience the conversation.  The lesson study process is methodical and focused on the long term, one piece at a time; after a year, we have finished one lesson.

We are finding that the search for curricular and instructional quality is a long road; no maps are available, so we are not sure that any particular action will lead to good results.  We do know that the process will lead to improvements if the conversation stays in the hands of faculty.  None of this work is for administrative purposes.  Our goal is to help every instructor become better over time, and we see administrative actions as a last resort.  We share ownership of the courses we teach, so this is not a matter of “I have the answers … now pay ATTENTION!”; it is a matter of professionalism for all instructors.

Hopefully, you found something of interest!

