Wednesday, October 12, 2005

More on Agency Issues in Teaching and Learning

Before I get started with my topic let me toot my own horn for a second and note that while everyone is abuzz about the Blackboard and WebCT “merger” I wrote what now seems to be a prescient piece on the CMS industry structure back in May. Maybe that means the rest of my posts aren’t all blather.

* * * * * *

I want to return to the agency framework from yesterday because once one starts to think that way it opens up insights into a bunch of related issues and gives a different way to consider them. Also, I want to make it clear that I’m not picking on the students. There are agency issues with the faculty and agency issues with campus administration, as well.

Let’s begin with course evaluation. It’s part of the institutional culture here, and I suspect at many other campuses, but really, it makes no sense whatsoever as it is currently practiced. It relies on students giving their opinions at or near the end of the semester in a manner from which they derive no obvious benefit. In the paper administration there is an element of coercion (the students can’t leave the room until they submit the form to the collector), but in online administration the coercion is absent. The low response rates that almost everyone reports for online administration are an acknowledgement that students don’t want to complete course evaluations. So, first and foremost, the current approach suffers from the problem that student buy-in to the process has not been elicited. Yet their opinion is valued nonetheless.

At my campus there are two questions on the evaluation that count in the faculty member’s promotion and tenure decision (and possibly in their salary decision): (1) Rate the instructor and (2) Rate the course. Outliers at either extreme are singled out: low performers get reassigned to other courses or, if they continue to perform poorly, are denied tenure or promotion, while outliers at the other extreme become candidates for teaching awards. And perhaps there is some information content in these outlier evaluations, the problem of eliciting student opinion notwithstanding.

But for the rest of us more ordinary souls, there really is scant information in these numbers. Further, it is known that these results can be manipulated via the instructor’s grading scheme and, indeed, grade inflation may be a direct consequence of that. In turn, grade inflation reduces the information content of these evaluations.

One could instead evaluate the instruction by looking at student performance: in down-the-road courses, in the course being taught (via artifacts the students produce), or via some survey done after the course has been completed, where the institution makes clear to the students that their opinion is being sought for the good of the institution and not just to prop up a P&T process that otherwise doesn’t know how to consider teaching. My campus does a senior exit survey of this sort, and it is an emblem of this after-the-fact type of evaluation.

If we wanted to continue with course evaluation in a way that accounted for the students’ preferences, we’d deliver two different types of survey. One would come during the middle of the course (where the instructor could react to it and base changes in course delivery on it). This would ask students what is working for them in the course and what is not. Then at the end of the course the survey would ask whether the instructor made changes in the course according to the needs identified by the prior evaluation. This process, still not perfect to be sure, does provide an obvious reason for completing the evaluation – to affect the quality of the teaching. (The summative assessment offered at the end of the course acts as an enforcement mechanism for instructors to take seriously the formative assessment offered in mid semester. Students, understanding that, now have a sensible reason for doing the summative assessment at the end.) Some instructors, of course, do try mid-semester formative assessment on their own, and that is to their credit. But the institution hasn’t given its endorsement to the practice, so most instructors don’t bother.

Another issue that exemplifies agency problems is grading, particularly the grading of final exams or term papers turned in at the end of the semester. In particular, consider the comments instructors provide in that instance. Do they facilitate learning? Or do they merely rationalize the grade the instructor is giving? My guess is the latter, and one would predict that the instructor writes more when giving a poor grade than when giving a good one. This means, of course, that the instructor views grading as a nasty business rather than as an opportunity to teach. And indeed, when all students have written on the same subject, it is quite dull and trying to keep paying attention to the writing and to weigh the relative merit of one student’s work against another’s. I know that many faculty hold machine-scorable exams in low esteem, but for finals in particular they have the merit that none of this agency issue is present (and that the results can be returned to the students in a timely fashion).

For those who don’t want to go that route, consider instead having the instructor’s evaluation of the work occur before the end of the semester (which means the work must be turned in earlier as well) and then having other students in the class review the work in some manner. There are a variety of ways to do this, but in almost all of them the instructor’s role changes from pronouncing judgment on the students to helping the students learn.

Thus, as with yesterday’s post, what is considered good teaching practice should be derivable at some level from a consideration of the various agency problems that crop up in teaching. Further, by adopting this agency view one can develop a more realistic way to consider modifications in approach based on evaluative evidence gathered while teaching.
