Thursday, March 22, 2018

What's The Right Systematic Response?

As I watch my friends conjecture about ways to self-protect after the latest Facebook debacle - switch to a different platform, change one's identity, be online but don't have any friends - it occurs to me that none of this can work and that, indeed, acting individually a person can't solve this problem.  Some collective action is needed instead.  What sort of collective action do we require?  I, for one, don't like to provide answers until I understand the problem.  Providing problem definition is work, requiring real effort.  The point of this post is to begin in this direction.  I hope that others who read it might then kick the can a little further and make progress that way.

The Myth of the Closed Container

The heyday for me as an individual contributor online was late 2005 through 2006.  I had discovered the world of blogging and in the .edu space was considered a serious contributor.  It was out in the open, seemingly anyone could participate, and the self-forming community of participants engaged vigorously, by commenting on the posts of others as well as by linking to those points in their own pieces.  At the time, at least within the .edu arena, there was some loathing for closed container solutions, particularly the learning management system.  An early exemplar is this post by Leigh Blackall.

While blogging of this sort still exists today, it is now in eclipse.  If you consider the point of view of the platform providers, blogging overall didn't generate enough participants to be very profitable.  There needed to be a way to turn up the volume.  Here we should ask why the volume wasn't greater.

My experience with online goes back a decade earlier, when we had online conversations in some client/server software that enabled threaded discussions and later by email via listserv.  Based on that experience I believe the following is safe to maintain.  Among the members of the group, a few will do the bulk of the posting, feeling comfortable expressing their opinions.  The vast majority will be lurkers.  It is much harder to know the behavior of lurkers, but I suspect some were careful readers of the threads yet never jumped into the fray, while others may have been more casual in their reading.  A critical question is this: Why doesn't the lurker chime in?  Two possibilities are (1) fear of criticism by other members of the group and (2) self-censorship about the quality of one's own ideas.   In this sense, even though these threads were in a closed container application, they were still too open to elicit universal participation.

People will open up more if they perceive the environment to be safe.  Having trusted partners in communication is part of that.  Having others who are not trusted unable to penetrate the conversation is another part. The issue here, and it is a big one, is that people often make a cognitive error with regard to the safety of the environment.  Email, for example, is treated as a purely private means of communication when there are far too many examples to illustrate that it is not.  (While readers might first think of email leaks as the main issue, people who work for public organizations should be aware that their email is subject to FOIA requests.)

Faux privacy may be its own issue.  If true privacy can't be guaranteed broadly, it may make sense to have very limited means of true privacy that are safeguarded to the max, with the rest of the communication semi-public.

With regard to Facebook in particular, there is a part of the software design that encourages the cognitive error.  This is about how somebody else becomes your friend in Facebook.  Is that somebody else to be trusted?  If they are a friend of a friend whom you do trust, is that sufficient for you to then trust this potential new friend?  If your set of friends is uneven in how they answer these questions, how should you deal with them?

Out of sight is out of mind.  You may very well consider these issues when you receive a friend request. But if you haven't gotten such a request recently those issues fall by the wayside.  Then, when you make a status update and choose for it to be available to friends only, you feel secure in saying what you have to say.  That feeling of security may be an error.

That sense of security may then impact what you click on (which we now know is being scraped to develop a sharper profile of you).  If, in contrast, you felt like you were being watched the entire time, you would be more circumspect in how you navigate the Facebook site.  So, odd as this may sound, one answer might be that all Facebook posts are publicly available.  Knowing that, the cognitive error is far less likely to happen.  Of course, that can only work if that becomes the norm for other platforms as well.  In other words, perhaps some sort of return to the blogging days would be right.

Micro-blogging might be considered from this angle.  It clearly has been more successful than long-form blogging in generating volume.  Part of that is the consequence of tight character limits.  They reduce the perceived need for self-censorship and instead create the feel that this is like texting.  Yet we should ask how many people who are Facebook users don't themselves do micro-blogging.  That's the population to consider in thinking through these issues.

The Myth of Free Software

Way back when I was in grad school, I learned - There's no such thing as a free lunch.  Although I'm not otherwise a big Milton Friedman fan, I certainly subscribe to this view.  Yet users of software that is free to them (meaning it is paid for by others) have grown used to that environment.  We are only slowly coming to realize that the cost of use comes in other ways.

“It don’t cost no money, you gotta pay with your heart.”
Sharon by David Bromberg 

With ad supported software, in particular, we pay by putting our privacy at risk.   While it is clear that some will call for regulation about how software companies protect the information about us, let's recognize that the incentive to collect this information will not go away as long as ads are the way to pay for the software.

So, one might contemplate other ways to pay for the software, in which the incentive to collect personal information is absent because there is no profit in it.  The most obvious alternative, at least to me, is to retain the free access to the user (the paid subscription alternative ends up limiting users too much so does not sufficiently leverage the network externalities) and thus pay via tax revenues.  This would be in accord with treating the software as a public good.  Taxes are the right way to fund public goods.

How might this work?  If a municipality or some other jurisdiction provided access to some software for its members, the municipality would do so by writing a contract with the provider.  Members would then log in through the municipality's portal and be presented with an ad-free version of the software.  I want to note that this is not so unusual as a method of provision.  My university, for example, provides eligible users - faculty, staff, students, and in some cases alumni as well - with free access to commercial software, for example Box.com and Office 365.  So the market already has this sort of model in place.  The only things that would need to adjust are that it would be municipalities or other jurisdictions that do the procurement and they would need to provide front ends so that members could have access but non-members would not. The online environment, then, could be without ads for members but would still have ads for non-members.

Part of the agreement and what would rationalize such procurement by the municipality is that the provider agrees not to scrape information from members of the municipality.  It is this item in the contract that justifies the public provision of the online environment.  In other words, people pay with their taxes to protect the privacy of members of their community.

This is obviously a tricky matter, because if I live in such a community that provides access and one of my friends is using an ad-supported version, wouldn't my information get scraped anyway, just because of that?  There are two possible answers to that question which are consistent with protecting my information.  One is to divide platforms into those that are only paid for by the various municipalities, so no user is in the ad-supported category.  The other is to (heavily) regulate how the information of users who don't see the ads gets collected.  Each of these poses challenges for implementation.  But do remember there is no free lunch, so we need to work through which alternative is better, rather than cling to an idealistic vision (total privacy protection coupled with no intrusion) that is actually not feasible.

Policing the Online Environment - News, Fake News, and Ads

One reason to note my own usage of the Internet from back in the 1990s is to mark the time since.  We have been in Wild West mode for those two decades plus.  We probably need something more orderly moving forward.  What should that more orderly something look like?

An imperfect comparison, which might be useful nonetheless, is driving on the Interstate.  As there is a general preference to drive faster than the speed limit, most of us would prefer at an individual level that there were no highway patrol.  On the one hand, that would be liberating.  However, we also care about the reckless driving of others and would prefer to limit that, if possible.  The highway patrol clearly has a role in that, as does the fine for speeding and how auto insurance premiums are impacted from getting a speeding ticket. The system tries to balance these things, imperfectly as that may be.

In the previous section where I talked about municipality access, a part of that is members of the municipality not seeing paid-for content.  Is that consistent with the software provider's incentives?  Think about the contract negotiation between the software provider and the municipality.  What will determine the terms of such a contract?  Might usage by municipality members be a prime determinant?  If so, the provider has incentive to jimmy up usage and might use salacious content for that purpose.  As with the speeding on the highway example, an individual user would likely gravitate toward the salacious content, but might prefer that other users do not, to preserve the safety of the environment.  One would think, then, that some form of policing would be necessary to achieve that control of other users.

Speeding is comparatively easy to measure.  Determining what content is suitable and what content is not is far more difficult.  One possible way out of this is for the provider to block all content from non-friend sources. Subject to an acceptable use policy, users themselves would be able to bring in any content they see fit via linking (for example, I'm linking to certain pieces in this post) but for the software provider to be out of the business of content push altogether.  Then, the policing would amount to verifying whether the provider stuck to that agreement, plus the monitoring of users who are actually trolls instead.  Another possible way is to generate an approved list of content providers and to accept content only from providers on that list, perhaps with users opting in to particular providers rather than having the software provider arbitrarily push content at them, though the provider might still retain the ability to push certain content.

A point to be reckoned with here is that self-policing by the software provider is apt to fail, because the incentives aren't very good for it.  But, on the flip side, we don't have a good model of effective yet non-intrusive online policing in broad social networks.  (In narrow cases, for example on my class site, the site owner can function in this policing role.  I tell my students I will delete those comments I find inappropriate.  I can't ever recall doing that for a student in the class, but since I use a public site I can recall deleting comments from people who weren't in the class.)

The concept of online police may be anathema to those of us weaned on the mantra - the Internet should be free and open.  Wouldn't policing be used to suppress thoughtful but contrary opinion?  Before answering that question, we should ask why there isn't more abuse by traffic cops.  They largely do the job they are paid to do.  If that system works, more or less, couldn't some online analog work as well?

Wrap Up

Some time ago I wrote a post called Gaming The System Versus Designing It, where I argued that we've all become very good gamers, but most of us don't have a clue about what good system design looks like.  There is a problem when a large social network provider operates with a gamer mentality, even as it is providing a public good on a very large scale.  We need more designer thinking on these matters.  In this post, my goal was not to provide an elegant design alternative to the present situation.  I, for one, am not close to being ready to do that.  But I hope we can start to ask about those issues a good design would need to address.  If others begin to consider the same sort of questions, that would be progress.

Thursday, February 22, 2018

More Thoughts On Campus Strategic Planning - Five Years Later

Not surprisingly, the campus is again engaged in a strategic planning exercise.  It's something universities do periodically.  I'm going to chime in, as I'm prone to do.  Like the last time, when I wrote Some Thoughts On The New Campus Strategic Plan, I'm going to restrict my attention to the teaching and learning part.  Unlike the last time, however, there are fewer documents to react to and thus I'm going to venture out some on my own more about how I see the issues.  (Perhaps we're a bit earlier in the process.  Of that I'm not sure.)  The overarching theme for teaching and learning has remained the same - Provide Transformative Learning Experiences.  So my post from five years ago might still be a relevant read now as it dissected the then documents about the discord between the goal (which I subscribe to) and the purported measures (which, in fact, were inappropriate for determining whether the goal had been achieved). There was additional analysis in that piece as well that is likely still relevant for the present.

But I don't like to be (too) repetitive in making my posts, so I will aim differently here.  There was a PowerPoint presentation produced by Kevin Pitts, the new Vice Provost for Undergraduate Education.  The first slide after the title slide does a recounting of initiatives in this category since the last strategic plan was implemented.  My attention was caught by the fact that the word major (used before the expression - educational initiatives) was in bold.  Were these initiatives indeed major or was casting this word in bold mere hype?  My instinct was to consider the initiatives from the perspective of the one undergraduate class I now teach each fall - The Economics of Organizations.  Do the initiatives listed on the slide matter for that class (which is taught in DKH where the Economics Department is located)?  My conclusion was that they do not.  Do they matter for students who are Econ majors?  My conclusion was that for current majors probably not, while for future majors perhaps a little, but not much.  With that bit of observation complete, I started to generalize.  We tend to innovate at the edges in instruction while leaving core business practices largely intact.  Maintaining that belief about where innovation occurs is my bias on these matters; that bias informs the rest of the piece.  The underlying question that I'd like to get at is this.  Might we innovate on core business practice in a way that actually improves matters?

There are five sections in what follows: 1) Optics, 2) Benchmark Measures, 3) Ethical Issues, 4) Suggested Reforms That I've Developed In Prior Posts, and 5) Conclusion.  The first three of these follow immediately from what I've written in the previous paragraph.  The fourth comes from responding to the bottom of slide 6 and the last slide in Kevin Pitts' presentation.  He asks about redesigning college education from scratch.  What would that look like?  He encourages the reader to "think big."  In fact, I've done this (in concept though not in implementation outside my own teaching) with some frequency on my blog, so I will link to some instances of those exercises and provide some annotation along with the links.  The final section is meant as a way to connect the dots.

1. Optics

Any SWOT analysis will have parts that are elevating, the strengths and opportunities, and parts that are discouraging, the weaknesses and threats.  For the analysis to be effective, all parts must be considered in full.  When I was in the campus IT organization, as the Assistant CIO for Educational Technologies (2002-06), we did such planning sessions off site and behind closed doors, with higher level management and senior staff in the organization. Presumably, this was to promote open and honest dialogue among the group in attendance.  The current strategic planning process is meant to be fully out in the open.  Idealistically, such open debate should be encouraged.  But one wonders whether, in practice, that means the weaknesses and threats will be soft-pedaled, or ignored entirely.  At one level, this could happen simply because we don't have enough mental bandwidth to do otherwise and because whatever goodwill the campus possesses has been exhausted on other matters - all things pertaining to Chief Illiniwek, the pending GEO strike, and lingering effects from the Steven Salaita matter.  There is the further concern that when done in the open the press gets hold of the issues, does its own spin on those, and this serves to reframe the debate.

To illustrate the last point I want to turn to a post I wrote a couple of months ago, The discord between how the U plays in the press and what is actually happening on the ground, which offered a critique of a column written by Frank Bruni of the New York Times.  It was/is my view that Bruni has elevated certain issues about free speech and political correctness, at the cost of entirely missing the more important learning issues that happen in many if not most classrooms.  Below is the last paragraph from that piece:

Let me wrap up.  The freedom of speech issue, as it pertains to Higher Ed, usually seems to be about discussions of our national politics and whether those happen on our campuses with both the liberal and conservative view represented in the conversation.  While that may be interesting to readers of Bruni's column, it really is a tertiary issue on campus.  The fundamental issue is about what students are learning and whether they are learning in a deep manner.  We actually don't have freedom of speech on this front, but it is not because of censorship.  It's because of the current business model of universities, which are so reliant on donations and tuition.  For both, it is believed necessary to promote a nice shiny view about what college is about, at least that is the belief by those in charge of marketing the university.  So there is discord between those marketers and the people on the ground, students and instructors.  I wish Bruni would write about this.  That might actually help to improve matters.

So, in my view, there is a very real issue of whether it is possible to do a credible SWOT analysis of undergraduate education, or if the marketing people will end up blocking it, and indirectly people like Kevin Pitts who are from the Provost's Office will block it too, because each of them wants to be a team player.  It seems to me that until this optics issue is embraced squarely, those of us who are part of the campus community but not in the Provost's Office should have very low expectations of what will come out of the strategic planning process - cheer-leading but not real change.  However, we are living in a time of much social upheaval.   Perhaps that will create a positive spillover effect for us in considering undergraduate education, and thereby serve as a credible counter to business as usual.

One reason to make this post on my blog rather than in the comment box on the Strategic Planning Web site is that I am entitled to my views so can raise some contentious issues.  Further, I normally get very few readers these days, so this likely will not cause a broad airing of the issues.  If I can bend one or two heads, I've reached my goal.  Then, if they care to do so, they can pick up the baton and take it from there.

2.  Benchmark Measures

When I first joined SCALE, in spring 1996, I did a project on student retention rates on campus, comparing the 10 day enrollment numbers to the final enrollment numbers in all undergraduate classes.  (The data were provided by DMI.  At the time the campus wanted to be very cooperative with the Sloan Foundation and retention was something Sloan was interested in.)  The upshot was that retention by this measure was quite high, though it was lower for College of Engineering classes (more on that in the ethics section).  But there was another quite interesting lesson from the exercise.  The size distribution of classes was highly skewed.  (I'm doing this from memory so I may be slightly off with the facts, but the picture I'm sketching is pretty close to what the finding was then.)  There were about 1500 classes overall.  About 30 of them were super large, and accounted for about half of all enrollments.  Our course numbering scheme was different then, but the upshot was that the bulk of the large classes were at the 100-level.  The only 300-level course among the giants (in excess of 800 students per semester) was intermediate microeconomics.

For these very big classes, also for the next sized down classes, it is useful to know how much human instructional resource (FTE faculty, graduate teaching assistants, and other human resources) are deployed to get some sense of a student to instructional resource ratio.   This is the course-level analog to the student-faculty ratio that many college guides publish.   At one extreme, is it only big lectures with no discussion section?  At the opposite extreme, is it only many small discussion sections and no lecture?  Or is it something else?  At the time, intermediate microeconomics was taught mainly in amphitheater classrooms with about 60 students per section - so there were a lot of sections, but they weren't nearly as small as in the introduction to rhetoric sections or the sections in the first semester of Spanish, where there were about 20 students.  I don't have the picture for what this looks like now, in general, but I know nowadays that intermediate micro is taught in the large lecture hall in DKH, with about 200 students per section.  

It is also useful to know not just which courses are large but also which students are taking those courses.  When I started at Illinois back in fall 1980, freshmen paid lower tuition (I believe there were three tiers then, with sophomores paying a bit more and juniors and seniors paying still more).  This practice was justified by the observation that freshmen were taking the large classes, those entailed less labor intensive instruction, so the expenditure on instruction per student was less.  We have since gotten rid of this particular practice for differential tuition.  Is it still true, however, that the large classes are mainly taken by freshmen?

If so, we really should think that through.  Any theory of human capital development will focus on the benefits of making investments in human capital early, so those investments can bear fruit over a longer duration.  This is the logic behind early childhood interventions for low income students.  The same principle should apply to college students.  Yet the practice at big public universities is to do something of the opposite, or so it would seem.  Having the data on this would be useful for considering the matter in depth. 

A related issue is whether the practices of the university perpetuate differences in income inequality among the families of our students.  For example, we should understand how prior college credit from AP classes is distributed among the student population.  Do students from wealthy districts earn a good chunk of their gen ed credits while still in high school?  If so, do they bypass some of the high enrollment classes, just for that reason?  If students from poorer school districts don't have the same opportunities for Advanced Placement courses, do they get less expensive instruction when at the university, because they have to take more high enrollment classes?  Again, having the data on this would be useful.

It would also be useful to look at these sorts of questions for courses in the major.  How do popular majors compare with less popular majors regarding class size?  Does the answer depend on which college the major is located in?  

A second set of issues to investigate, hence requiring other data to provide benchmark information, is about tenure-track faculty teaching undergraduates versus adjuncts teaching undergraduates.  In this, think of the tenure-track faculty as the bosses who set department policy, while the adjuncts are the hired help.  Back in 1980, the standard teaching load in Economics was 2 courses a semester and the expectation was that one of those would be a graduate class and the other an undergraduate class.  Just about every full-time instructor then was on the tenure track.  Faculty clearly preferred to teach graduate students, so they could better tie their teaching to their research.  Undergraduate teaching was deemed service work.  There were exceptions to this rule, to be sure, but the rule gave the then current ethos.

What is the ethos now?  How does that vary from department to department?  A general thought is that if the tenure track faculty largely are teaching graduate students only, serious reform of undergraduate education will be given short shrift by that department.  Even if at the campus level changes are desired, they won't be implemented in such departments or will be implemented in a halfhearted manner. A related matter is how connected the adjuncts are to the tenure track faculty.  My sense is the two groups are largely separate.  Further, while the tenure track faculty have something of a community within their departments, the adjuncts are more autonomous and many of them are not plugged into the campus support community for instruction, even though teaching is their full-time activity.  It would be good to bring evidence to bear, to determine whether that perception is accurate.  If it is right, how might reform then be implemented?  Does the entire culture need to change to get even modest changes in teaching practice?

One last area to benchmark would be course grades.  Aggregate distributions could be published, by department, sorted by 100-level, 200-level and so on,  then sorted by college, etc.  The aggregation would need to be sufficiently large so that individual instructors didn't feel compromised by the practice.  Let's say that it is possible to respect individual instructor privacy in this way.  Then the idea would be to provide information that speaks to George Kuh's Disengagement Compact.  If grades were not mutable, high grades would be indicative of high performance and low grades of the converse.  But might it be that on campus low grades signify intellectual rigor maintained in the course and high grades the opposite?  The tradition has been for grade distributions in most classes to be the instructor's prerogative, while in a few high enrollment classes with multiple lectures and common exams, there is an agreed-upon grade distribution imposed ahead of time.  (Or it might be that the course coordinator imposes the grade distribution and doesn't seek the assent of the other instructors.)  Adjunct instructors, in particular, need to get tolerable teaching evaluations to secure their employment.  This impacts not just grades, but how the course is taught.  (I fear there is a lot of teaching to the test.)  Have we addressed this issue on campus or largely ignored it?  I would argue the latter.

3. Ethical Issues

I'm going to consider the ethical issues through the lens of inequality - haves versus have nots.  This distinction is sometimes rendered geographically, north of Green (College of Engineering) are the haves while south of Green are the have nots.  But that distorts things in certain ways.  There are STEM departments south of Green and many of them are haves as well.  Also, specifically from the perspective of undergraduate education, the College of Business is one of the haves.

With that, here I'm going to focus on two distinct practices that we should question from the perspective of a Campus strategic plan, rather than separate college-specific strategic plans.  The ethical dilemmas arise in the presence of inter-college exchanges that work less well than they should.  One of the practices is the college-specific tuition surcharge.  Both Business and Engineering charge tuition above the base rate.  LAS, in contrast, does not have a tuition surcharge.

The other practice is about students changing their major by transferring from one college to another.  Some of these transfers are motivated purely by intellectual interest.  The student took an intro course that resonated and now wants a different major.  It is different for those students who start out in the College of Engineering.  This is the one college on campus that seems to deliberately pursue an attrition strategy for students in the first and second year.  Many students find Engineering education brutal and non-nurturing.  Some survive the ordeal and then treat it as a badge of honor.  Others become quite discouraged and want out.  Most stay at the university but transfer to another college.  This past semester in my Econ of Organizations class, 10 out of the 25 who finished the course had transferred from Engineering.  As they blog about their campus experiences to tie into course themes, I can report that they were not prepared for this to happen and were quite disillusioned thereafter.  In my framing of things, Engineering makes a mess with these students and then leaves it to others to clean up the mess.  

Elsewhere on campus you don't see the attrition strategy broadly applied.  In fact the Campus is under some pressure to increase graduation rates.  President Killeen has argued that as a goal in his appeal for greater support from the State of Illinois.  If the Economics department as a whole deliberately embraced an attrition strategy, where would the students go?  My guess is that many would not do an internal transfer but would leave the university entirely.  The Campus would count that as a black mark.  If that is right, we really should be reconsidering the internal transfers out of Engineering and how to make these students whole again and/or Engineering needs to adopt an approach that students find less punitive.  Tying this to the previous section, it would be good to have the numbers on how big an issue this actually is.  Until this past semester, I was under the impression that it didn't happen so often.  Now I'm less sure of that. 

Economics is an odd major on our campus because many of the students aren't really interested in the subject matter.  A good chunk, however, are College of Business wannabes.  But College of Business has restrictive admission; it requires a high standardized test score for admission as a freshman or a high GPA during the freshman year to be an internal transfer.  Students who are Econ majors often can't get over those bars, but they can imitate the Business major to a certain extent.  Many, indeed, do a Business minor.  The Business minor is another flashpoint where the ethical issues manifest.  

Having been an Associate Dean in the College of Business from 2006 to 2010, I have seen these issues from both sides.  The minor does not enhance the reputation of the College of Business.  And, historically, minors have been underfunded across the board.  (I'm ignorant of the present situation, but I'm guessing that while it may be better than 10 years ago, there are still issues.)  At the time I became an Associate Dean, the number of Business minors was kept down.  Partly to earn some goodwill with Campus at the time BIF came online, the minor was expanded soon thereafter.  But the quality of the offerings in the minor courses has been mixed at best.  Some of my students from last semester told me as much.  In my own teaching, I was trying to send a message to my class during much of the course: they should value their college experience as a thing in itself as much as a passport for later.  That proved a very hard sell for the students in the Business minor.

I will add one other point on this, about my younger son, who is a U of I graduate now, having finished after the spring 2016 semester.  He started off in ECE.  After one year he wanted to transfer to CS, but didn't get in then.  So he transferred to the Math Department, which has a CS program.  He paid the College of Engineering surcharge for that.  A year later he did transfer into CS.  The Math Department intermediate step worked, in part, because the tuition surcharge was part of the deal.  Why does Math have its students pay this surcharge to Engineering, while Econ doesn't have a surcharge paid to Business?  It looks like historical accident and/or sidebar negotiations determined these practices rather than a sound ethical approach.  Because I'm aware of this inconsistency, it's very hard for me to believe that the students' best interests are what determines these matters.

I want to note that the above is meant purely for illustration.  The ethical issues are far broader.  Even in-state base tuition and fees are now far higher in real terms than the tuition my parents paid back in the 1970s for me to attend Cornell (Arts and Sciences), an Ivy League school.  Further, tuition as a share of overall university revenues is now much higher than it was when I started back in 1980, when State of Illinois tax dollars provided the bulk of university revenue.  These twin facts, which are similar at many other public R1s, put the university in a squeeze.  The faculty culture that I experienced didn't give the ordinary undergraduate a prominent role to play on campus.  But in an ordinary business sense, a functional enterprise needs to put its resources in places that encourage subsequent revenue production.  If tuition is the university's meal ticket, the student experience must be a good one.  The have colleges seem to be doing that.  The have-nots, not so much.

4) Suggested Reforms That I've Developed In Prior Posts

I've learned a bit about how to promote my ideas over the years.  Rather than refer to labor-intensive teaching, a label that makes sense to an economist but that other instructors might find offensive, I've come to call it high touch instruction, with a focus on the emotional aspect: a student who has been touched because the student's instructor evidently cares that the student learns.  Each of the suggestions below is about a different aspect of high touch instruction.  I will present them in reverse chronological order.

This one is quite recent.  I wrote it last week.
The Freshman Seminar - Taught By A Retired Faculty Member
This post suggests that, to counter the large-class syndrome, we offer a freshman seminar taught by a kindly grandparent-like figure, conjuring up images of Mr. Chips, if you will.  Specifically, I juxtapose the pedagogic goals I have for The Economics of Organizations class that are not specific to the content of that course but instead pertain to getting students to better understand their role as learners, with the needs of freshman students.  The further thought is that retirees are time abundant, which is necessary for this type of teaching, so they would be more willing to do it as long as they got some recognition for the effort.  Thus it might be possible to do this without breaking the bank.

This next one is actually a series of six posts that comes under the tag, Everybody Teaches.  The link is to the first post in the series.  The link to the tag, which has all six posts on one page, can be found near the bottom of the post.
Everybody Teaches
At the time of this writing, the Campus was gearing up its relationship with Coursera and, in my view, going a bit MOOC crazy.  As a possible source of innovation, MOOCs surely are interesting.  But I didn't think they should be the only game in town, so I took it upon myself to propose an alternative, one entailing high touch instruction.  Each of these posts is rather long, so together it is a lot of reading.  Also note that the fifth in this series is similar to, though not identical to, the Freshman Seminar piece linked above, while the sixth in the series relates to the next link.

This last one is another series of posts.  This time there are seven of them on Inward Looking Service Learning (INSL).
Inward Looking Service Learning
The logic in these posts is, first, that the only labor input that truly scales with the number of undergraduate students is the students themselves.  More experienced students can and should be helping students who are newer to campus.  Second, and this point is probably more controversial so would have to be investigated in some depth, peer mentoring is an especially valuable activity for the mentors.  They learn a great deal from the mentoring.  So the mentoring activity literally should be thought of as applied instruction, just as some students who do internships now have them satisfy an academic requirement for fieldwork.  Third, large class instruction can have a high touch aspect if we re-conceptualize the organizational structure to give prominence to, and make formal, the discussion group.  Peer mentors should be deployed in small group settings and in one-on-one interactions with other students.  They shouldn't be viewed as cheaper substitutes for graduate student TAs, which, unfortunately, is how they are too often deployed on campus now.

5) Conclusion 

I need to critique my own idealism, as represented by the posts I linked to in the previous section.  I base that critique on my experience with learning technology.  In the SCALE days, we did some wonderful things with technology.  Then SCALE morphed into the Center for Educational Technologies (CET) and the mission changed: to bring those wonderful things to the mainstream.  While there was broad diffusion of online tools in teaching, ultimately culminating in an enterprise learning management system (Illinois Compass), the reality is that most of the usage was rather dull and the wonderful things didn't scale nearly as much as the adoption of the technology itself.

I came to realize that innovative faculty and early adopter types of instructors do those wonderful things, through their own sitzfleisch and imagination on how to deploy the technology effectively.  It remains an open question whether more mainstream faculty can produce interesting results as well, if they get suitable encouragement and support.  A related question is what that encouragement and support looks like and whether it is affordable.  I really don't know.

In the 1990s, there were faculty who were far more innovative than I was in deploying the technology for teaching.  Mostly I stole/borrowed ideas from others and then retrofitted those ideas for my course.  But, all modesty aside, I was much more innovative than the majority of faculty who were the bread and butter audience for CET.  My confidence in the suggestions in the previous section comes from my own experience teaching.  I have had some good results recently with high touch teaching, and twenty years ago the use of peer mentors, who conducted online office hours during the evening, was the best part of my technology innovation.  Whether any of that can scale now, I don't know, but I'm hopeful it might.

I want to wind this up with a greater sense of urgency.  The ethical issues I described need to be addressed.  I believe that high touch teaching is at least part of the answer.  In other words, because of the nature of public R1s, there will be some large class instruction and some ways in which students will feel they are being treated more like a number than like a person.  But if they can have other experiences where instructors treat them with decency and where they can see they are being encouraged to learn deeply, that can serve as a suitable counterforce.  That's the argument I'm trying to make in this piece.

Tuesday, February 20, 2018

Having Enough on the Ball

It's a fact about college admissions at selective universities that the process is a tournament.  When I teach about pay that varies with individual output, as a component of my course on the economics of organizations, we discuss different types of incentive contracts.  Pay for performance, as we usually consider it, is based on absolute performance.  Sales contracts, for example, are based on the dollar amount sold.  Likewise, fruit pickers are paid by the number of bushels picked.  Absolute performance is measured in the units of output, whatever those happen to be in the particular instance.  Tournaments, in contrast, are based on relative performance: how the person ranked compared to the others in the competition.  It is these rankings that determine the winners and the losers.

In a world where the absolute performance distribution is stable among the population, tournaments will produce outliers only if they have a small number who enter and then an even smaller number of winners.  Small samples tend to produce outliers with some regularity.  If, however, the numbers of entrants and the number of winners are both large (although the number of winners obviously needs to be less than the number of entrants), then the law of averages can make you pretty confident about the performance level of the winners.  I believe that admissions at the U of I and other major public universities satisfy these large number conditions.  (There are exceptions to this, such as athletes in the revenue sports, but let's ignore those here.)
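The large-numbers claim can be illustrated with a small simulation.  This sketch is mine, not anything drawn from actual admissions data; it assumes, purely for illustration, that performance follows a standard normal distribution, and it compares a tiny tournament (1 winner out of 20 entrants) with a large one (2,000 winners out of 10,000 entrants):

```python
import random
import statistics

random.seed(42)

def winners_mean(entrants, n_winners):
    """Draw performance scores, pick the top n_winners, return their mean."""
    scores = [random.gauss(0, 1) for _ in range(entrants)]
    top = sorted(scores, reverse=True)[:n_winners]
    return statistics.mean(top)

# Small tournament: the winner's quality swings a lot from run to run.
small = [winners_mean(20, 1) for _ in range(200)]

# Large tournament: the mean winner's quality is nearly identical every run.
large = [winners_mean(10_000, 2_000) for _ in range(200)]

print("spread, small tournament:", statistics.stdev(small))
print("spread, large tournament:", statistics.stdev(large))
```

With a small field, the quality of the winner varies considerably across repeated runnings of the tournament, so outliers show up regularly; with a large field and many winners, the average winner's quality barely moves, which is the sense in which large admissions pools make the performance level of the admitted class predictable.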

There may be more systematic reasons for departures from predictable measures of output.  Geography within the state of Illinois can matter for admission.  A student from a comparatively poor rural district or from an inner-city school (in either case, a school with fewer resources than a suburban school) may be evaluated more on future potential than on past performance.  Conversely, students from affluent districts nowadays game the system much more than they did 45-50 years ago (when I was in high school and applying for college).  For example, taking a private prep class for the standardized tests is quite common now.  It happened then, but it was rarer.  And now there is a kind of arms race in amassing AP credit.  It mattered then too, but not nearly as much as now.

Let me offer two possible explanations for the gaming, both of which have some merit, in my view.  One is the idea of leapfrogging.  A kid who probably shouldn't be admitted participates in this sort of gaming to increase the likelihood of getting admitted.  But then, if kids like this game the system, the other kids who are likely candidates need to do it too, as a means of self-protection.  The upshot is that by the established criteria for admission there end up being more students above the bar than can be accepted, so the process becomes something of a lottery.  The other explanation is that the gaming serves as a substitute for the real type of academic preparation we'd like to see (mainly a lot of reading, done broadly and deeply).  Measuring the real academic preparation is difficult, highly subjective, and provides its own sort of moral hazard.  (If it is done primarily by the letters of recommendation that teachers write, and if the high schools are measured by the college placements of their grads, then the teachers will be under some substantial pressure to skew their evaluations upwards.)
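The leapfrogging story has the structure of a classic arms race, which can be sketched as a two-player game.  All of the payoff numbers below are invented for illustration; the point is only the shape of the incentives:

```python
# A stylized 2x2 game: each kid chooses whether to "game" (test prep, AP
# stockpiling) or not.  Payoffs are (my admission chance, rival's chance);
# every number here is made up purely to illustrate the incentive structure.
payoffs = {
    ("game", "game"):       (0.5, 0.5),   # arms race: back to a coin flip
    ("game", "no_game"):    (0.8, 0.2),   # leapfrog the rival
    ("no_game", "game"):    (0.2, 0.8),   # get leapfrogged
    ("no_game", "no_game"): (0.5, 0.5),   # nobody games: same coin flip
}

def best_response(rival_choice):
    """My payoff-maximizing choice, given what the rival chose."""
    return max(["game", "no_game"],
               key=lambda mine: payoffs[(mine, rival_choice)][0])

# Gaming is a dominant strategy: it is best whatever the rival does.
print(best_response("game"), best_response("no_game"))
```

The (game, game) outcome leaves both kids with the same admission odds they would have had if neither gamed, minus the cost of all the prep, which is the usual arms-race result and matches the lottery outcome described above.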

* * * * *

The above is meant as background information.  Now I want to get at the heart of the matter.  Over the years I have had a variety of students who I felt were ill prepared for college.  Instructors are supposed to have high expectations for their students; see, for example, Chickering and Gamson's Seven Principles.  If you think of a course like a pipe, where students enter at one end and exit at the other, and if you have a reasonable expectation about the value added to the student while in the pipe, then it stands to reason that to meet end-of-pipe expectations the student must bring enough to the table when entering the pipe.  My sense of things is that the fraction of students who don't bring enough to the table has been increasing.  The question is why.

My older son graduated from college in 2014.  When he was in high school, perhaps a sophomore or junior, we made a family visit to the college advisor at the school.  He told us something entirely unsurprising.  The key to getting into a good college is to read a lot, read broadly, read challenging material; in a nutshell, reading is the answer.  My operating hypothesis is that kids who develop the reading habit, especially for reading done outside regular coursework, will end up with something on the ball.  They'll be aware of quite a lot and will have started trying to understand things by their own sense making.  They won't rely entirely on adults to explain what is going on.  A further hypothesis is that even among the population of students planning to attend selective universities, reading is on the decline.

Recently I've read some screenplays by Paddy Chayefsky, including Network, which became a very well known motion picture.  In the screenplay the mad yet all too wise news commentator, Howard Beale, makes a speech decrying that the public doesn't read newspapers.  Instead, they get all their information from "The Tube."  For some reason, I found this comforting to read.  The movie is from 1976.  There was no cable TV then, or it was just starting.  There was no Fox News.  Hearing cries of illiteracy among the public from more than 40 years ago rings true and reminds us that this is an ongoing phenomenon, not a recent invention.

Yet Beale was talking about the overall population, many of whom watched him when he was delivering his rants on TV.  What about the sub-population of students headed for selective colleges?  Were they different from the rest?  I wish I knew the answer to this.  I know my own experience.  I read the New York Times on a daily basis while in college (and in graduate school too).  I'm guessing that many of my high school cohort did likewise.  But about the rest of the college population, I don't know.  Were kids from the Chicago burbs different from the NYC kids in this way?  I don't know.  Regional differences, cultural differences, generational differences: somewhere among them lies some explanation for their reading.  I wish I had more information with which to provide a better explanation.

A concomitant issue is how instrumental students are about their learning in the classes they take.  I wrote about this a little in my previous post. As with the reading itself, this has been an issue for quite some time.  But I believe it has been accelerating, witness the gaming I discussed in the previous section.  And here is the thing.  A kid who is instrumental about his learning won't do much reading at all in his leisure time, because he won't get credit for it.  So leisure time gets packed with other activities, most of which don't stretch the mind in the way reading does.  (There is some argument that video games are potentially educative.  I want to acknowledge the argument here, but note that I'm skeptical about the conclusion, and believe others should be skeptical about the conclusion as well.)

I also want to note that certain reading online (text messages, microblog posts, and other social media) doesn't cut it as the type of reading I mean, the kind that will make the kid grow intellectually.  I'm certainly not against socializing with friends, sending photos, going back and forth with cutesy emoticons, and the like.  It's all fine, in its place.  But for producing intellectual development, for getting kids to develop their own sense of taste and worldview, it doesn't deliver.

If this diagnosis is largely correct, what should educators do about it?  And what should parents do?  Let me close with a little speculation.  The key is from pre-school through elementary school.  Reading is first a social activity done aloud, either before bedtime or at other times during the day. At some point the kids must be encouraged to do something similar on their own.  To encourage this, make going to the library a festive activity.  Make sure there are lots of books around the house for the kid to read.  Talk about what the kid is reading during dinner.  We need a carrots and not sticks approach to getting kids to embrace the reading habit.  Once they do, they'll take it from there. It is our job to get them to that point.

Monday, February 12, 2018

The Freshman Seminar - Taught By A Retired Faculty Member

A couple of weeks ago the Econ Department contacted me about whether I wanted to teach my course next fall.  This is much earlier than in past years.  I was told that it is harder to hire retirees now.  A justification for that would need to be written.  And the funds to pay me would have to come from the College of LAS, not the department.

I took this as true but didn't quite understand why it was so.  As I teach an upper level class in the major, the rationale for offering my course, or indeed any special topics class, is to give majors an ample selection of such classes from which they must choose.  (I believe the requirement is 4 such courses.)  When I first started to teach the class, the department had recently experienced an outside review, which argued there wasn't sufficient variety of elective courses then.  Perhaps we've more than caught up since.  But I know the number of majors has been on the rise as well.  (I don't know whether that growth is faster than overall enrollment growth or not, which itself may be an issue.)

The rehiring of retirees under contract, whether faculty or staff, has been an issue for some time, with one of the larger concerns that "double dipping" seems an abuse of the public trust.  There are regulations in place to curb the double dipping.  Earnings under contract can be no greater than 40% of earnings while a full time employee.  I come nowhere close to that teaching this one course.

There has also been a lack of imagination by campus administration in considering how to deploy retirees effectively and whether to do so in an entirely voluntary capacity, exclusively in a contract capacity, or in some mixture of the two.  As it is now, when I supervise a student in an independent study for credit, or when I mentor a student, or when I engage a group of students in a discussion group that is not for credit, those activities are done on a voluntary basis.  If, however, I teach a course listed in the timetable for which students get course credit, then that is done as contract work, meaning that I get paid for it.  I want to note that there might be some connection between these different activities, and in what I say below I will amplify on that.

I also want to note that in some respects there can be volunteer activity happening within paid work, though this conception requires a different way of conceiving of work than we normally use.  A framework that helps here is Akerlof's model of Labor Contracts as Partial Gift Exchange, which I described in this post.

Here are the model's basic elements.

There is a minimal performance standard below which the employee will get fired.  There is a performance norm, substantially above the minimal standard, that typifies what workers produce.  The difference between the minimal standard and the higher norm constitutes a gift that workers give to the firm.  Likewise, there is a minimal wage below which workers would quit and find work elsewhere and there is an actual wage above that minimum that the firm pays to workers.  The difference is a gift that the firm gives to its employees.  Gift giving demands reciprocation for it to be sustained.  When that happens all involved feel good about the place of work and productivity is high as a consequence.
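The elements above reduce to a few lines of arithmetic.  The numbers below are entirely invented for illustration; nothing here comes from Akerlof's paper:

```python
# Illustrative figures only -- none of these numbers appear in Akerlof's model,
# which is stated in general terms.  Output and wages are in arbitrary units.
MIN_STANDARD = 40          # produce below this and the worker is fired
EFFORT_NORM = 65           # what workers typically produce
RESERVATION_WAGE = 30_000  # paid below this, the worker quits
ACTUAL_WAGE = 38_000       # what the firm actually pays

worker_gift = EFFORT_NORM - MIN_STANDARD    # extra effort given to the firm
firm_gift = ACTUAL_WAGE - RESERVATION_WAGE  # extra pay given to the worker

# The exchange is sustained only if both gifts are positive,
# i.e., each side reciprocates the other's gift.
sustained = worker_gift > 0 and firm_gift > 0
print(worker_gift, firm_gift, sustained)
```

The point of the model is that the two gifts sustain each other: a firm that cut the wage back to the reservation level should expect effort to fall back toward the minimal standard, and vice versa.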

The gift part is then a voluntary contribution.  I want to focus on a particular type of gift giving by the employee, where the employee puts in more time at work than is required.  We might say such work is done in a labor-intensive manner.  Then I want to consider instruction, in particular, from this perspective.  A course that is lecture based, has auto-graded homework, and has multiple choice exams scored by Scantron represents the minimal standard.  One obvious reason that this mode of teaching persists (perhaps with tweaks like clicker questions offered during the live class session) is that it economizes on instructor time.  In this approach the only time the instructor devotes to giving feedback to students is in answering questions during lecture or during office hours.  Further, in many such classes neither of those modes is very active, meaning students don't opt in, so students receive very little feedback altogether.

In contrast, consider instruction where there is both a lot of in-class discussion (ergo a seminar) and also a lot of interaction between students and the instructor outside of class, either via mandatory office hours or via written work that receives extensive comments from the instructor.  In either case the student receives substantial feedback on the student's own formative thinking under this alternative.

Why have any instruction in this alternative mode, as it is certainly a more costly way to teach (as measured by the instructor time required, if not the dollars needed to elicit that time)?  It's a question that needs asking and requires a serious answer.  Here I will content myself with one component of a possible answer.  Under this alternative mode the students will, no doubt, observe the additional effort being put in by the instructor.  And it should be clear that the effort is being made on behalf of the students' learning.  They should, therefore, become convinced that the instructor cares about them both as students and as human beings.  One of the damning things said about undergraduate education at Big Public U is that students become anonymous and in the process become convinced that nobody cares about them.  Labor-intensive teaching should aim to counter that.

* * * * *

I'm going to switch gears now and talk about the course I've been teaching as of late - The Economics of Organizations, which is an upper level course in the major.  It is my experience from teaching this class that motivates what I write here.  I want to consider the following issues:  (a) my motivation in teaching, (b) what I've learned about the students as a consequence of teaching this course repeatedly, (c) a little on the methodology of teaching, and (d) mentoring or discussion groups that happen with students who have previously taken this class.  In considering each of these I will try to connect to how it is relevant for teaching a freshman seminar.

I teach now, admittedly, because it gives some work focus to my life during the fall.  The money is part of it, but only a small part.  There are certain aspects of teaching, particularly grading and some of the administrative/clerical work, that I dislike.  As I've commented elsewhere, not completely in jest, if I could get rid of the disagreeable elements I would teach entirely for free, as a volunteer activity.

This gets to the real reason for teaching.  I want it to matter somehow to the students.  When I can see that the teaching does matter, it is enormously satisfying.  When it evidently doesn't matter, I find it quite disappointing, sometimes even distressing.  There are also many intermediate situations where it is not evident whether the teaching matters or where it is not evident how much it matters.  In these cases I want to know what I can do to make the teaching matter more.

For my efforts to matter to the student, the student must care about the course.  When students do care, my batting average for mattering is pretty high.  However, and perhaps surprisingly, many of my students, roughly one third of the class last semester, don't seem to care much at all, even though my course is an upper level class in the major, the subject matter is much more relevant to "the real world" than in most other economics classes, and I put considerable effort into the teaching.  This offers something of a puzzle, at least to those of us approximately my age, who don't recall experiencing anything like it as college students.  In contrast, if you ask those students who evidently do care about the course to discuss their experiences doing group work for other courses, invariably they will tell stories about teammates who shirk by free riding on the efforts of the other group members.  They've had sufficient history with group work that the existence of students who don't care about the course would come as no surprise to them.

In either case, it is worth considering explanations for the lack of engagement, so as to ask whether anything can be done about it.  Let's consider two possible explanations, one social, the other "academic alienation."  The social explanation takes the student perspective to be something like this.  Once the student has graduated, the student expects to work quite hard on the job, whatever that is.  So the student, perhaps a bit myopic in reckoning with the situation, wants to have some fun now while that is still possible.  Further, there may be some indirect consequence.  Initially this comes from living away from home and not having parents to answer to.  That freedom from adult oversight is apt to put social activity front and center.  After a few years it comes from having 21 be the drinking age, whereas when I was in college it was 18.  Once the kid is "legal," that would seem to give further license to the have-fun-now mindset.  It's hard to tell whether this is true from an instructor's perspective.  But I think it is fair to say that any regulation has direct consequences that are intended and then indirect consequences that aren't.  Kids drinking during the middle of the week, and that perhaps encouraging them to skip classes, may be one of those unintended consequences.  I don't believe it was much of an issue when I was in college, and I don't know how much it drives the lack of student engagement now, but to the extent that it does, I don't see the University having much ability to do anything about it.

Now let's consider academic alienation as an alternative explanation.  The story goes something like this.  The students have been involved in a paper chase for quite some time before college, certainly during high school, maybe well before that too.  The paper chase is not particularly nurturing of the student, neither intellectually nor emotionally.  So you might expect the alienation to manifest earlier, and in some cases it does, but often it is forestalled.  The counterforce is that the students were good at the paper chase in high school, witness that they got into the university they wanted to attend.  They take pride in their academic achievement, which is measured by the trappings (GPA, number of AP courses taken, etc.) rather than by some internal meter of intellectual satisfaction.  This blows up on them, however, once they've gotten to college.

The pond has gotten a lot bigger and now they are just an average fish, maybe even a small one.  The classes in the first year tend to be large lectures and are rather impersonal.  There is grading on a curve, but even with that, in some tough classes the raw scores on exams are quite ego deflating.  There is seemingly an excessive number of rules to manage all of this.  The rules make things more impersonal.  While it is possible to have human interaction with somebody in the know, e.g., by going to the TA's office hours, it is psychologically difficult to do this, especially at first, so the tendency is to avoid those interactions.  There is then a negative feedback loop at play.  It is just too tough emotionally to bust a gut studying for a test only to produce a mediocre performance.  So this encourages procrastination on the schoolwork and more time partying in the dorm.  In turn, the student stops caring so much about getting good grades.  That's a means of emotional self-protection, which in that limited case makes sense.  But it is the beginning of the end of the student caring about school.  By the time I see the students, when they are juniors or seniors, they've reached the other end of the tunnel.  The freshman seminar might then offer students a different experience at the get-go.  If the student had a positive reaction to the seminar, the student might not fall into the negative feedback loop, in spite of the environment described earlier in the paragraph.  So the freshman seminar might hinder some of the academic alienation.

Before turning to those students who do care, let me note that there are other possible explanations for lack of student engagement.  Indeed, last week the Chronicle featured a series of pieces on student anxiety and depression.  I was struck, reading one of those pieces, by how an instructor or the student's classmates might mistake student anxiety for lack of engagement.  Not being a psychologist, I have no recommendation that can provide a proper sorting.  I simply want to note here that if a student is already suffering from deep anxiety as an entering freshman, a seminar might very well be ineffective at alleviating even some of the symptoms.

Let's turn to the paradox of the students who care.  These are students who are quite diligent about school, as measured by their attendance in class, the timeliness with which they submit course work, and the evident effort they show in the work they submit.  Yet for the most part they seem unaware that their efforts are producing surface learning only.  An expression I first learned from Timothy Luke about 20 years ago, one that at the time sounded like mere formalism but now seems to me a prescient diagnosis, is that the students are extremely instrumental about their studies.  School is a means to an end; learning is not an end in itself.  Grades are what matter.  Improvement in critical thinking is not on the radar.  The paradox is twofold.  Why is there so much instrumentalism?  How do the students sustain their own diligence in the presence of this instrumentalism?  I will note here that I mentioned Tim Luke to observe that this was going on in the late 1990s, well before No Child Left Behind and the Accountability Movement took hold.  These may very well have exacerbated the problem, but they didn't cause it.  We should ask what did.

Here's a simple and perhaps simplistic explanation.  The students who care, at least the ones whom I see in my class nowadays, were "good kids" when they were growing up.  They obeyed their parents and tried hard to make their parents proud of them.  Perhaps through elementary school parents can tell whether their kids are learning in school by talking with them at the dinner table and/or by reviewing their homework.  Soon thereafter, however, parents who are not themselves educators are unable to directly determine whether their kids are learning.  Good grades then become a proxy for whether the kid learns.  Further, to the extent that schools have tracking, or there are better schools with restrictive admissions, parents will push the kids to get good grades as the necessary hurdle to overcome to get into the privileged learning setting.

There has to be a different part of the story, however, which I don't doubt but which I find harder to explain.  The students have become expert at memorization via all the drill in spelling and in mastering basic arithmetic that they experienced in elementary school.  They keep applying memorization thereafter, even though they really need a different way of learning to understand more sophisticated ideas.  The alternative would be explorative, learning by discovery.  Students would play with ideas and then try to make sense of them.  Yet most students I see don't seem to know how to do this.  I attribute this lack of skill to not reading sufficiently and/or not reading things that are intellectually challenging and that would foster the kids' intellectual development.  I'm pretty sure this is right.  The part I can't explain is why this doesn't happen, at least early on, say in middle school.  Some have argued that reading can't compete with video games or texting friends.  I don't buy that.  I watched a lot of TV, yet I also did a lot of reading.  Others argue that there is too much homework now.  Maybe that's right.  In any event, I believe the path to learning how to self-teach can be found through reading and that most students don't find that path on their own.

So a real reason for the freshman seminar is to expose students to an alternative to memorization and to do so early enough in their college careers that it might impact how they proceed in college thereafter.  I should note, however, several caveats.  First, the rote approach gets more and more ingrained the longer the student uses it without trying other alternatives.  So it might really be better to consider the freshman-in-high-school seminar rather than the freshman-in-college seminar, for just this reason.  To this I'd say one step at a time.  If retired faculty are the ones doing the teaching, it makes sense that this happens in college, not in high school.  Second, for anything that is really new to learn, we'll be novices and need substantial practice to get better at it.  One seminar should be thought of as an introduction only, nothing more.  If students still care a lot about their grades, as they surely will, trying the new approach across the board might be deemed too risky.  Perhaps students should do it in one course per semester only (one where they are apt to have an inherent interest in the subject) or maybe only in their pleasure reading.  Third, to embrace any new approach as an opt-in rather than because it is required, the new approach must somehow resonate with the participant.  In other words, the freshman seminar has to click with students in a unique way if it is to have a positive effect.  If it is seen as just another course, it will be entirely ineffective.

Let me turn to teaching method.  While my course has many different components, I want to focus on the weekly blogging that students do.  I should note that this activity in no way follows from the economics training I received.  Indeed, I've had no formal training in writing after high school (I don't recall much training in writing during high school either) and only one WAC seminar (WAC stands for Writing Across The Curriculum) on how to teach with writing.  My own blogging was done via self-discovery.  The writing met some needs I had at the time to get ideas going around in my head out of my system.  Though I do it less frequently now, those needs persist.  The idea of teaching with blogs I got from Barbara Ganley, after which I learned that many other instructors did it as well.  I have since fitted the approach to my own inclinations.

The students get a prompt from me that ties into the economics topic we are studying.  The vast majority of students write to the prompt, though they have the option of writing on a theme of their own choosing, as long as they connect it to the course.  There is a 600-word minimum requirement.  Initially that may be daunting, as is the blogging itself, which happens on a publicly available Web page.  Each student is assigned an alias to write under, so the concern that others outside the class will monitor their posts gets addressed that way.  Still, it is quite different for them as compared with other homework they do, so it takes about a month before students get comfortable with the activity.  For the students who are diligent, the word requirement stops mattering by then.  For those who don't care, the brevity of their posts makes it evident that some of them are just going through the motions.

I respond to each post, typically with several paragraphs.  I have learned to focus on addressing what the student says, often zeroing in on a particular point that was made.  I then ask derivative questions, suggest possible related issues for the student to consider, and make a point of mentioning it when I don't understand something from my reading of the post.  The student is supposed to respond in kind.  The good students do that.  It makes for an interesting back and forth.  In this, I think I've been aided by my experience as an administrator.  I respond to my students like I would have responded to my staff.  There is a tendency to teach the subject rather than teach the student.  My administrative experience helps to counter that tendency. 

I don't grade individual posts.  I track them to let students know that I've seen what they wrote.  I use portfolio grading instead.  There is one grade given at mid semester for the first half of the blog posts and another grade given near the end of the semester for the second half of the posts.  While I would like to see improvement in the posts over the semester, mostly I'm hoping that the students take the activity seriously and are earnest in what they produce.  Such a student will get substantial credit for the blogging.

The underlying idea is to convince students that their job is to produce a narrative of their own creation that connects course ideas to their experiences and other things they already know.  This contrasts with the notion that their job is to absorb the narrative entirely produced by the instructor.  Further, through the commenting, the student should get the impression that teaching and learning is about having an extended conversation.  Then getting the course to click, as I mentioned above, happens when students come to realize they want such conversations.  Not every student does, to be sure.  But as it is a novel experience for them, the instructor can't know who does want it until the students have the opportunity to experience it.  Among the various components in my class, for the diligent students the blogging usually gets higher marks than the other parts of the course.

Near the end of the fall semester as the course is winding down, I offer the students the possibility of participating in a weekly discussion group in the spring.  In some sense this is a way for those who want to participate to continue the conversation, albeit on other than course themes.  A few years ago I had a group of three that worked well and wrote about the experience here.  Since then I've had individual students take up the offer, which becomes more mentoring and less discussion, though I wouldn't split hairs about the difference between the two activities.  In one case this didn't get started till the following fall, but then the mentoring continued into the spring.  And in a couple of cases I've been having ongoing email threads with former students who have since graduated, as an alternative mode of discussion when face-to-face conversation is not available.  Further, I occasionally post to the class Web site, well after the course has concluded.  I know that at least a couple of students monitor that, since I recently posted about my stress fracture there and they sent their well wishes to me.  The point is that it is possible to stay in touch with students after the course is over and to do so in a more or less intensive manner.

This same point holds for the freshman seminar.  There is potential for the instructor to mentor a former student, at the student's inclination.  If the seminar were taught in the fall, the mentoring might be intensive in the following spring and then become more casual thereafter.  The contact would be there for the student if the student felt a need.  This is yet a different reason for the freshman seminar.  If the student comes to trust the instructor, that trust is itself an asset on which an ongoing relationship can be built.

* * * * *

I want to conclude in this last section with a few thoughts about whether my experience is too idiosyncratic to scale or whether it might actually be possible to pilot a program based on this experience and then expand beyond the pilot in due course.

Let me do some simple arithmetic first.  We have roughly 8,000 first-year students.  (Some, with enough AP credit, enter as sophomores.  They should be allowed to take this freshman seminar in spite of their class standing.)  If the enrollment in a particular seminar is 20 students, my preferred number, then we need about 400 such courses to give every student this sort of opportunity.  (We also have many transfer students.  They too should get the opportunity, so the numbers are actually far greater.   One wonders whether that would necessitate a different sort of course meant exclusively for transfer students or not.  I won't speculate on that further here.)
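The simple arithmetic above can be written out as a short Python sketch.  The figures are the rough ones from the text, and transfer students are left out, as in the parenthetical.

```python
# Back-of-the-envelope seat arithmetic for the freshman seminar.
# Rough figures from the text; transfer students are excluded.
first_year_students = 8000
seats_per_seminar = 20   # my preferred enrollment

seminars_needed = first_year_students // seats_per_seminar
print(seminars_needed)  # 400
```

Including transfer students would push the count well past 400, which is the sense in which the numbers are actually far greater.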

For a good pilot, I imagine having 10 different instructors teaching seminars in various fields.  Good candidates, in my view, are instructors who have taught WAC classes in the past, or who have done a lot of blogging on their own, or who have high marks as very good teachers.  These sorts of people could probably design their own seminars in accordance with their own inclinations.  As I have written elsewhere, early adopter types of instructors tend to produce good and interesting results, primarily as a consequence of their own creative efforts.  That doesn't mean the results can scale beyond the pilot, and it doesn't indicate the sort of training other instructors would need before teaching such a seminar.  But it does seem clear that instructors beyond the pilot would need intensive training of some sort beforehand.

Scaling immediately beyond the pilot might happen with just the original set of instructors if additional labor were allowed to complement them.  Two possibilities are using students who have previously taken the seminar as helpers of various kinds, to offer feedback from a student perspective, and using humanities instructors well trained in WAC to co-teach the course.  (Such co-teachers might be a way to eventually diffuse the approach to other instructors.)

Beyond this, one would have to ask what level of compensation would be needed to elicit participation from retirees who are potential instructors.  I don't know the answer to this, but I have a different experience to rely on to illustrate a possible answer.  In the previous decade, when I was still working full time, I taught for the Campus Honors Program.  I did that three times, twice Econ 101, and once a special topics class not in economics, which is the first time I taught with blogs.   They paid a fixed fee for teaching a CHP class.  I have no idea how they arrived at the number, but I interpreted the payment more as a recognition thing than a compensation thing.  Most of the CHP teaching then was done as an overload rather than on load.  It's recognition that would be needed here, in my judgment.

Also, there would need to be some resources put into tracking the students who took the seminar as well as to track parallel students who didn't take the seminar, so there is some basis of comparison.  The institution wants to know if the seminar matters for the student subsequently.  Undoubtedly, a pilot is insufficient to establish this proposition or to refute it entirely.  But it can be suggestive of the sort of follow up needed to understand the issues better.

I, for one, would rather teach freshmen now, more for the upside potential than for any concrete sense of what can be achieved.  Battling senioritis is discouraging.  Having students who show interest in class only to see them graduate immediately also is less than satisfying.  I wonder if there are enough other retired faculty who see it the same way.

Tuesday, January 23, 2018

The Animal Alarm Clock

I want to apologize to readers of this blog for being offline so long.  The explanation is, first, that I was having some general malaise and needed to take a break for that reason and, second, that I have a stress fracture in my foot and need to keep it elevated, which makes writing more of a challenge.  As it is, this will be a very brief post.

Over the years I have often found myself being awakened from sleep by the sound of the dog barking.  She wants to be let outside.  This occurs, sometimes more than once, in the middle of the night.  It also happens during the daytime, when I'm taking a nap.  It is such a common happening that I've gotten used to it, and I do get up each time.

In the last few weeks something new has happened.  I hear the barking while I'm sleeping, but the sound is in my head only.  The dog is not asking to be let out and is probably asleep herself, elsewhere in the house.  Why this is happening I can't say.  It doesn't happen while I am awake.  My hearing is normally quite good then.

There probably is some psychological term for when an implied obligation produces a faux stimulus.  In this case it is very easy to confirm that it is faux.  All I have to do is look at the place where the dog is let outside.  If she's not there, then she didn't bark.   I started to wonder what would happen if there weren't such ready confirmation.  Is this the path to "hearing voices" or believing in "conspiracy theories?"

Thursday, January 11, 2018

In Praise of Anecdotal Information

Getting to know you,
Getting to feel free and easy
When I am with you,
Getting to know what to say 
Getting to Know You, from The King and I

Sometimes I think computing and the Internet do a number on our common sense, to the point where we start to worship false idols because doing so is trendy, rather than because there should be any reasonable expectation that they'll deliver the goods.  I want to push back on this some, and in this post I will do that by making two points.  The first, I hope, is fairly obvious.  We should distinguish data form from information quality.  Data form is quantitative versus narrative: the first is what Likert-style questions on a survey produce, so that responses can be aggregated across respondents, while the second is what responses to paragraph questions produce, which often defy ready aggregation.  By information quality, I mean whether the information tells the observer anything of interest or if, instead, it is all largely useless.

I believe there has been a confounding of data form, with a strong preference for the analytical type, and information quality, with the assumption that the information will be high quality because there is a lot of it.  This bias finds expression in "data driven decision making" and "data analytics" and has produced what I view to be pernicious business practices, some of which I will illustrate in this piece.  That point, in itself, I believe is straightforward.

The next point, however, is equally important, perhaps more so.  It is that poor usage of low quality analytic information that erodes trust in our institutions.  If we are ever to restore trust, we must make efforts to counter this push to analytical information, regardless of information quality, and move more to privileging anecdotal information, when it is high quality.

Let's begin with a very well known decision, one which readers are apt to regard with strong feelings.  This is James Comey's decision to announce that the Clinton email probe would be re-opened close to the 2016 election.  Here is my post mortem on that decision.  First, based on events subsequent to that choice, I think it fair to say that Comey made this choice for internal-to-the-FBI reasons.  He was afraid of leaks that would prove embarrassing.  So he was trying to get ahead of that.  He was not trying to impact the election.  Second, he must have been certain that Clinton would win, which is what the polls at the time showed.  The polls are just the sort of analytic information that we tend to privilege.  Third, it is probably impossible to do an interview with Comey now to get him to do his own post mortem on that decision, but I assume he would have a great deal of regret about it.  He underestimated the consequence of that choice on the election.

The upshot is this.  You can have a straight shooter, with access to the data, who is trying earnestly to make the right choice.  This in no way is a safeguard against making a bonehead play.  There still will be a lot of residual uncertainty.  Using analytic data of this sort doesn't eliminate that.  In retrospect, a decision can look quite bad in these circumstances.

The Comey decision, as bad as it was for the country, was a one-off.  I want to turn to ongoing decisions.  A particular practice that I'd prefer to take on in this piece is the online survey, delivered to people in two possible circumstances: (a) there was a recent transaction involving a user, and some stakeholder wants information about the user's experience, or (b) there was no transaction, but the stakeholder has access to the user's email address and so solicits information about user experiences more broadly construed.  This practice should largely be abandoned, because the quality of information it produces is poor.  I know it is wishful thinking to hope for that, but below I will explain why the approach produces low information quality as well as consider alternative practices that would produce better information and thereby improve the decision making.

First, let me consider one sort of data collection effort done in this vein, though mainly not done online, which is the course evaluations students fill out at the end of the semester.  At the U of I, those are referred to as ICES.  Students have no incentive to fill out these forms; they are done anonymously, and since they are delivered at the end of the semester, the feedback provided will not impact the instruction they are getting (and, for the following reason, it might not impact the instruction the next time around either).  Course evaluations of this sort impact grade inflation.  Students want high grades and report greater satisfaction when they expect to get a high grade, regardless of how much they learned in the course.  So we get the phenomenon where satisfaction is reportedly high yet not much learning has taken place.  George Kuh refers to this as the Disengagement Compact.

Drilling down a little on this, one should note that the type of questions one would ask if the main goal were to improve instruction are quite different from the type one would ask if the goal is to rate instruction.  Improvement happens via formative questions, many of which are simply about describing the current practice.  Evaluation happens via summative questions, thumbs up or thumbs down.  In theory, you might have a mixture of both types.  In practice, the summative questions trump the formative ones, as to what people tend to care about.  On the ICES, the first two questions are summative - rate the course and rate the instructor.  The rest of the questions hardly matter.

These lessons from the course evaluations apply to the type of surveys I'm critiquing here.  The user gets no benefit from doing the survey.  (Recently I've gotten requests that offer a lottery for some prize in exchange for a response.  It is impossible for the user to calculate the likelihood of winning that prize, and so to do the expected value calculation that would determine whether it is reasonable compensation for completing the survey.  They do give an estimate of how long it takes to complete the survey.  My sense is that the compensation is far too little - and as a retiree I'm time abundant.)  The user also isn't told how the person the user engaged with in the transaction will be impacted by the survey information.
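To make the point concrete, here is a minimal sketch of the expected value calculation a respondent would need to do.  The prize value, number of entrants, and completion time below are all hypothetical; the trouble in practice is that the number of entrants is never disclosed, so the calculation can't actually be carried out.

```python
# Hypothetical expected-value calculation for a survey lottery.
# The survey usually states the completion time, but never the
# number of entrants, which is what makes this incalculable in reality.
prize_value = 100.00          # assumed prize, in dollars
entrants = 5000               # assumed number of respondents (never disclosed)
minutes_to_complete = 10      # surveys do usually estimate this

expected_payoff = prize_value / entrants             # dollars per response
payoff_per_hour = expected_payoff * 60 / minutes_to_complete
print(round(expected_payoff, 2), round(payoff_per_hour, 2))  # 0.02 0.12
```

Under these made-up numbers the expected compensation is two cents per survey, or twelve cents an hour, which illustrates why the compensation seems far too little.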

A second issue comes up, though less so with the course evaluations, because much of the teaching practice is actually reasonably well set.  In general, when doing formative assessment, the questioner, ignorant of the user experience ahead of time, won't know what to ask.  So there needs to be some way for the user to define the relevant issues in the user's response and not be blocked from doing so by closed-ended questions that are off point.  Focus groups may be preferable to surveys for just this reason, though focus groups are sometimes hard to assemble, which is why they are employed less extensively than they otherwise might be.  Within the survey itself, paragraph questions really are better for getting at the right formative question in that they give the user the ability to define the issues.  Of course, the person reading the responses then needs to determine whether a particular comment provides a guide for a sensible change to the process or if the comment is too idiosyncratic and therefore should be ignored.  I know of no algorithmic way to make that determination.  For me, it is more art than science.

Let me turn to a practice that I think can do things better, which might work if there are repeat transactions involving the same parties.   This is to evaluate the individual transaction or a small set of related transactions, and do that repeatedly, so it is evident that the evaluation information gets utilized in the subsequent transactions.  Further, this makes the evaluation seem part of an ongoing conversation, which is precisely what you would have if all the interactions between subject and evaluator were one-on-one.

I first did this sort of thing back in fall 2009, when I taught a CHP class that had evaluations of individual class sessions, writing this up in a post called More on Teaching with Blogs.  The survey I used can be found in that post.  It really was quite simple - 5 Likert questions on whether our discussion went well or not, emphasizing a variety of features that a good discussion should have, and then a paragraph question for general comments.  While the process was interesting for a while and did educate the students some about my goals for the discussion, eventually enthusiasm for it waned when the novelty wore off.  Further, because that class was small, only 17 students, I hadn't tried to repeat the approach in my teaching since.

I reconsidered this prior to the past fall semester.  I had been struggling with attendance, so I wanted to do something to encourage students to come to class, but I also wanted students to experience "gift exchange" in our class, so while I ended up recording attendance, I didn't want that measure to directly impact the grade.  What I came up with instead was for those who did come to class to have the option of filling out a survey just like the one in that post, and then giving the student a few bonus points for doing so.  (This means that the surveys couldn't be done anonymously, because there was no way to give credit in that case.)  After a few of these, but before I had said I'd stop delivering them, the forced response questions stopped having any meaning to me or to the students.  Most of the comments were about whether the discussion was effective and whether it had only a few of the usual suspects participating or got more students involved.  As with that CHP class, some students commented that the discussion was good although they sat on the sidelines during it and only listened.  This is just the sort of thing that you learn from paragraph questions that you wouldn't anticipate ahead of time.

After we exhausted the bonus points I had planned to give out (and that were announced in the syllabus), I moved to an even simpler form, in case the students still wanted to give their comments on a class session.  I got one comment for the remainder of the semester; that was it.  Evidently, getting people to participate in giving this sort of feedback is a big deal, even if it is very easy for them to do so.

Now let me segue to other sorts of transactions where I'm the one being asked to fill out the survey.  Later today I have a visit with my primary care physician, a routine visit that we do periodically.  Invariably, after that visit I will be sent a link to a questionnaire provided by the health provider's portal and then forwarded to the email account I have linked with that portal.  The first couple of times I filled these things out.  But I have since stopped doing that.  It is entirely unclear that there is any benefit to me for filling out such a survey.   Instead consider the following alternative.

Suppose that rather than evaluate my visit, there was an ongoing conversation about my wellness.  I've been taking blood pressure meds on an ongoing basis for several years.  I have the ability to monitor my blood pressure at home, but I don't always do that, especially if I've not been exercising and have overindulged with food and drink.  Suppose that periodically, perhaps monthly, I got a request from the nurse to upload my recent information, which would then be reviewed with perhaps a bit of back and forth electronically.  My incentives for providing that would be quite different than they are for filling out those surveys.  The information would be about my health and go directly to my healthcare provider.  And my course of behavior might be modified a bit based on this sort of communication.  This would be a much more holistic way to learn about my user experience and to promote my well being.  Further, it might not take that much more time on the provider end to do this type of tracking.  But it wouldn't be used to evaluate the doctor or the nurse as to the quality of the job they are doing.  That wouldn't be the purpose.  My care and my health would be the purpose.  I have interest in both of those.

Moving to quite a different context, I've thought a bit about how Amazon might improve its processes (I have a Prime account) to focus less on the individual transaction and more on the overall experience.  Based on some discussion over the holidays with other members of my family, I'm quite convinced that most users don't understand Amazon pricing much at all - how long will videos be free to the user and when will they return to having a rental fee, for example?  It is disappointing to watch something for free, then either want to watch it again sometime later or watch another in that series, only to find that the video now carries a rental fee.  It's the sort of experience that might drive the user to another provider.  Is there a way for Amazon to both elicit user preferences this way and to educate the user about time windows and prices?  If so, why don't they conceive of their evaluation effort more that way?  It is true that for fundamentally new items, how many stars an item gets might matter to other users, as that pattern of evaluation has already been established.  But is that the only thing that does matter?

Let me continue with a brief discussion about analytic information obtained purely by clicks, where there is no survey whatsoever, as well as what optional comments add to this picture.  How useful is either sort of information, and do they give the same picture or not?  Here let me note there is a tendency toward dashboard presentation of analytic information rather than providing the full distribution, perhaps because some people would be overwhelmed if the full distributions were provided.  Dashboards give aggregates or means only, but otherwise don't give a sense of the underlying distribution.  For example, I have a profarvan YouTube channel, and it gives monthly information about the top ten videos, measured by minutes watched and then again by number of views.  My number one video in December was on the Shapiro-Stiglitz model.  There were 150 views and the average length per view was 4:21.  The thing is, this video is more than a half-hour long.  It goes through all the algebra that's in this paper, and there is quite a bit of it.  What is one to make of the information that the average length of viewing is so much shorter than the full video?

Based on comments on the video I received a few years ago, I gather that a handful of students go through the whole thing rather carefully.  The comments are from those students, who said it helped them really understand the paper.  Many more students, however, take a quick look and that's it.  A reality about online information today is that for many people it merits a glance, nothing more.  So I would describe the distribution of users as bi-modal.   There is a small mode of quite serious watchers and a much larger mode of casual viewers.  Is that good or bad?  Mostly, I don't know.  When it is students currently taking my course and they are in the casual viewer category, that's disappointing to me, but I have enough other sorts of observations like this to know that it will happen.  When it is students elsewhere, the viewing is entirely optional on their part, so this is consistent with them trying it out, mainly not finding it particularly useful, but once in a while there is a student for whom the video does hit the mark.  Naturally, I'd like the mode with the serious watchers to be larger.  But, I'm far less sure that I can do anything to impact that with the videos I do provide.  There simply isn't the sort of information you'd need to figure out what would convert a casual viewer into a serious watcher or, indeed, whether that is even possible.  In that sense the information YouTube does provide is quite rudimentary.  We don't know why the person checked out the video, nor why the person stopped viewing. The data given are simply insufficient to answer those questions.
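A short sketch shows how a bimodal audience produces the short average watch time the dashboard reports.  The split between serious and casual viewers below is hypothetical; only the view count (150) and the rough average (about 4:21) come from the channel's own numbers.

```python
# A bimodal audience behind a single dashboard average.
# The 15/135 split and per-group watch times are assumptions for illustration;
# the view count and the roughly 4:21 average are from the YouTube report.
serious_viewers, serious_seconds = 15, 31 * 60   # watch the whole half-hour video
casual_viewers,  casual_seconds  = 135, 84       # glance and move on

views = serious_viewers + casual_viewers
avg = (serious_viewers * serious_seconds
       + casual_viewers * casual_seconds) // views
print(f"{avg // 60}:{avg % 60:02d}")  # 4:21
```

The point is that the dashboard's single mean is consistent with many very different underlying distributions, which is exactly what gets lost when only aggregates are shown.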

Further, even the comments on the videos are coming from viewers I haven't previously met.  So the comments are of a one-and-done form, or perhaps a little back and forth, but then it is over.  There is no ongoing conversation here.  (My own students would likely comment on the class site rather than in YouTube.  On the class site there is a better chance for the comments to be part of the ongoing class discussion.)  Truly valuable anecdotal information comes when there is both an ongoing conversation and a great level of trust between the participants, so they are willing to open up and give "the skinny."  Of course, for this to matter the person has to be an insider and have either information or a point of view that others would value having.  You might think this type of inside information is restricted only to VIPs.  My experience, however, is that it is much broader than that, and that gathering this sort of information, time consuming to be sure, gives a much better picture of what is going on in a social setting than can be gotten merely by peering at the numbers.

Yet this information requires some human being to assemble it into a coherent picture.  Does our current tendency to pooh-pooh anecdotal information speak to our inability to sketch such a picture?  Or is it simply an argument against coming to a conclusion based on far too little information altogether?  Can we agree that in a world of complexity understanding what is going on is an arduous task?  If so, can we then agree that information in all forms, analytic and anecdotal, can be helpful in producing that understanding, but that some information is of very low quality and so doesn't help that much?  If we can agree on these things, that would be progress, and this piece will have found its mark.