Wednesday, November 05, 2014

Please call it Experimentation, not Trial and Error

Learning involves false starts, many of them.  The learner needs to get used to that.  Everyone says, "we learn from our mistakes."  Let's hope that is true.  Sometimes from my teacher's vantage it seems students have one or two false starts and that's all she wrote, though I may be too pessimistic about that.  I'd like to put that pessimism aside and start out on a higher plane in this piece by assuming that learning does eventually occur.  Some refer to the entire process, one false start begetting another until eventually something that's been tried seems to work reasonably well, as trial and error.  I wish that expression had never taken hold. It does not accurately reflect what goes on (or what should go on) in most cases. It literally means the following, if thought about mathematically.

There is a set of alternatives; call it S = {s1,s2,...,sn}.  Each alternative in S is deemed equally likely to solve a puzzle.  Beforehand, all that is known is that one alternative, si, solves the puzzle.  The other alternatives do not.  The question is what process should be followed to identify the solution to the puzzle.  That process is trial and error.

This possibly describes a few learning situations.  For example, if you have a bunch of keys on a key ring and they look the same and you know one opens the lock to the door you want to open, you'd go through the keys by trial and error to determine which key works.
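To see the idea in code, here is a minimal sketch of that key ring search, written in Python.  The names (keys, opens_lock) are invented for illustration.  The point is only that when every alternative is deemed equally likely, the order of the trials might as well be random.

    import random

    def trial_and_error(keys, opens_lock):
        """Try equally likely alternatives in arbitrary order until one works."""
        untried = list(keys)
        random.shuffle(untried)            # no alternative is favored over any other
        for attempts, key in enumerate(untried, start=1):
            if opens_lock(key):            # the trial
                return key, attempts       # success, after some number of errors
        return None, len(untried)          # every trial was an error

On average the right key turns up about halfway through the ring, and nothing learned along the way improves the next guess.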

Most learning situations are not solved by trial and error.  A priori the alternatives are not equally likely to solve the puzzle.  Some are more likely than others.  If the student asks herself, "Why dillydally?" then she has reason to select the most likely alternative first.

Identifying which alternative is most likely requires intelligence.  A model (or if you prefer, a story) is constructed to explain how the puzzle gets solved.  The intelligent student either has the model already on hand, part of the world view she brings to the task, or has to generate such a model by learning from what she has read, from talking with the teacher or fellow students, from pure introspection, or from some combination of these.  The alternative selected is the one that best fits the model.  And then, performing the experiment offers a test of the model.
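By way of contrast, here is the same loop with a model added, again just a sketch with invented names.  The model is represented by a hypothetical scoring function that ranks the alternatives by how likely each is to solve the puzzle, and the most plausible alternative gets tried first.

    def experiment(alternatives, model_score, solves_puzzle):
        """Try the alternatives the model favors first, most plausible to least."""
        ranked = sorted(alternatives, key=model_score, reverse=True)
        for attempts, choice in enumerate(ranked, start=1):
            if solves_puzzle(choice):      # the experiment confirms the model
                return choice, attempts
        # every alternative failed - evidence that the model itself needs revising
        return None, len(ranked)

If the model is any good, the answer turns up early.  If it isn't, the string of failures is itself evidence about the model.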

If that seems to work, it's one and done.  The interesting part happens when that initial experiment proves a false start.  I believe it matters a lot here where the model comes from and whether the person has some stake in the model being true.  It is much easier to discard a model as false if one has no skin in the game of it being true.  For the person with skin in the game of supporting the model it seems there are three possibilities:

a)  Stop experimenting entirely and leave the puzzle unsolved so as not to find further evidence that the model is in error.  In other words, refuse to learn because of being quite uneasy about what one will find.

b)  Repeat the experiment under the hope that "I did something wrong.  The next time it will work."  This is making the same mistake twice, something we're not supposed to do.  But we don't give up our old beliefs without a fight.  It is frightening to have to abandon something we hold as true. Resistance to doing that is natural.  So if the experiment suggests we need to abandon our beliefs then an immediate alternative hypothesis emerges - we made some mistake in performing the experiment.  Let's do it again to eliminate the possibility of such a mistake.

c)  We know our thinking needs to change and we start the search for another model with which to repeat the cycle.  

Just to keep things simple in discussing this I'm not going to get at where that prior belief comes from and how deeply it is held.  Also, to keep this from getting too abstract, I'm going to give a very simple example of what I'm talking about here.

I use Moodle as my way for students to check their within-course grades (on homework and exams, etc.)  I keep my grade book in an Excel workbook on my home computer and transfer grades into Moodle by first saving the Excel file as a CSV file and then importing it into Moodle.  I only import a few columns at a time, and what I've taken to doing this year is to first export those columns only from Moodle, along with the student bio information, fill in the blank columns with data from my grade book, and then import the filled-in version of what I had previously exported back into Moodle.  This year and last the process worked okay, except for the grades of one student in the class.  I would get an error message that the student wasn't enrolled.  Yet the student shows up in the Class Roster in Moodle and there is a row in the Moodle grade book for the student.  So there is the puzzle.  In this case there is a workaround.  I can manually enter that student's grades.  But the workaround is a pain, so I'd like to find a better solution.
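For readers who want to picture the mechanics of the fill-in step, here is a rough sketch using Python's csv module.  The file names and column headings are invented; the real ones come from whatever Moodle exports.  All the sketch does is match rows on an identifier column and copy grades from the grade book into the exported file before importing it back.

    import csv

    KEY = "Email address"                      # the bio field I had been using to identify students
    GRADE_COLUMNS = ["Homework 5", "Exam 2"]   # whichever few columns are being updated

    # grades keyed by identifier, from the Excel grade book saved as a CSV file
    with open("gradebook.csv", newline="") as f:
        gradebook = {row[KEY]: row for row in csv.DictReader(f)}

    # the file exported from Moodle, with the grade columns still blank
    with open("moodle_export.csv", newline="") as f:
        reader = csv.DictReader(f)
        fields = reader.fieldnames
        rows = list(reader)

    # fill in the blanks wherever the identifiers match
    for row in rows:
        match = gradebook.get(row[KEY])
        if match:
            for col in GRADE_COLUMNS:
                row[col] = match[col]

    # write the filled-in version, which then gets imported back into Moodle
    with open("moodle_import.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        writer.writerows(rows)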

My model in this case starts with knowing that Moodle has both an import and an export function for grades; any instructor who uses Moodle can discover that these functions are built into the software.  Also, when I had a different problem last year and couldn't get the import to work at all, I got some help from the people who support Moodle about how to rectify that problem.  I followed their instructions and succeeded, except for the problem I've just described.  My model doesn't go so far as to tell me how to troubleshoot this problem.  Presumably the people who support Moodle, armed with much more experience and possibly with a network of other support people with whom they can consult, have a more sophisticated model of what is going on here, and with it they might be able to determine how to fix the problem.

I gather from my communication with them that the issue is related to which field within the bio information the instructor uses to identify the student.  I had been using the email address field.  It was suggested that instead I try using the UIN, the student identification number.  That seems to work.  Puzzle solved for me, though without understanding why there is a problem with the email address.  And since I still don't understand that, I doubt it would have ever occurred to me on my own to try the UIN instead of the email address.  But it did occur to the support person to try this.
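In terms of the sketch above, the entire fix amounts to changing the identifier column (again, the column name is just illustrative):

    KEY = "UIN"                            # match on the student ID number, not the email address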

Let me zoom out from this example and now consider alternative (c) above.  There is as much learning in finding an alternative model as there is in doing anything else in the process.  One reason a person is driven to look for a new model is that a pure trial and error approach would amount to a needle-in-the-haystack search.  In other words, when S is a very large set, a different way than trial and error is needed.  One then wants an intelligent way to select the next alternative so a tolerable answer can be found in a reasonable amount of time.  That's a good reason for model generation.

There may be other reasons, such as wanting to understand similar but not identical puzzles.  One could grind through each of them in sequence.  Finding a model that addresses them all is an alternative that might get the learner to understand each of the puzzles in one fell swoop.  My goal here is not to explicate all the reasons for building models or uncovering models that others have built.  It is enough for now to assert that getting a new model is desirable.

Thus, the process has an element of design in it - finding a model that both can be solved and fits the circumstances provided by the puzzle. Sometimes the new model is a simple tweak on the old, as seemed to be the case with the resolution of my Moodle grade book issue.  Other times, the old model must be thrown out entirely, to be replaced by something completely different.  Obviously, there is more design effort in the latter than in the former.  And in that case, the learner is hoping for an Aha! moment to spark the new design.  It is this search for the new design and the hope for epiphany that is conveyed with the use of the word Experimentation but is completely denied by the expression Trial and Error.  This is the reason for abandoning that expression.

I now want to take the discussion entirely out of the realm where the Scientific Method is practiced in some way, shape, or form and move it to a subjective realm where there aren't right answers, just approaches where some appear more pleasing than others, such as how to make the point I'm trying to make in an essay such as this.  Does this piece convince readers to do what the title asks them to do?

After a little reflection, even if the reader is otherwise sympathetic to the approach I've taken above, there will be a realization that it is far from complete in describing learning.  Among the bigger things it omits is puzzle generation.  It assumes the puzzle has been posed already, as has the set of potential solutions.  Puzzle posing - asking a good question - is a very important skill, maybe more important than puzzle solution.  Irrespective of which is more important, what is clear is that they are interrelated.  Knowing what puzzles a person can solve strongly influences the questions that get asked.  Conversely, framing a question in an interesting way provides strong motivation for finding an answer to it.

Once one gets on a roll in imagining how learning happens, there will be several other ways in which the above will seem too limited.  Typically there are many puzzles that come together.  Some are subsidiary to the main one.  Others are interrelated.  For still others, the main puzzle may be subsidiary to them.

Then there is the sequencing of the learning.  New puzzles don't occur all at once.  They emerge from trying an alternative, observing the outcome, and reflecting on that.  The process is fundamentally open ended.  With learning in the classroom we tend to put some closure on it in midstream or perhaps even earlier.  There is some imperative that demands a deliverable by a certain due date.  That imperative comes from outside the process.  It means the learning up to when the deliverable is turned in will be partial at best.  This is my teacher's lament about semesters - that instead students should just do research projects that conclude when they produce results, not at a fixed date.  Now I've gotten that out of my system and can return to the narrative.  Let me do so by talking about the puzzle generation and sequencing of learning for writing this post.

In this case, the first puzzle was triggered by my reading some description of how students learn in a book I'm trying to finish.  Trial and error was mentioned in that description.  I was bothered by that as a means of explaining how real learning occurs.  I re-read the paragraph and said to myself - I've got to write a blog post on this.

I do have skin in this game.  Five or six years ago I started to write a book called Guessing Games, to give my views of what Education should be about.  I'm talking about Middle School through College (and beyond).  In a nutshell, the focus of Education should be on getting the student to learn to find the next model, having reached stage (c) described above.  And the core hypothesis is that the student does this by developing an intuition for what might work.  Based on that intuition, the student guesses what the new model looks like.  The argument is that we need to encourage our students to become good guessers.  But it is insufficient for students to rely on their gut as to where the solution lies.  They must perform the experiment to see how their solution does and learn from that as well.  The process is reasonably well articulated in Chapter 6, Guessing and Verification.  In that chapter it is argued that the other big thing students must develop, in addition to honing their intuition, is a sense of taste about what makes for a pleasing solution.  In the subjective realm, where most of us operate most of the time, it is this sense of taste that serves both as guide and as judge.  It is what I'm using in writing this essay and figuring out how to sequence the argument.  It's also what I use when proofreading the piece to see if it passes muster.

I wish I could say that it's all systems go for me, but alas I'm aware of a major weakness in my writing that I don't always know how to address, so I want to talk about both the yin and the yang of confronting one's sense of taste.

Being a social scientist at heart, with lots of fundamental training in Economics but a healthy appreciation of other social sciences as well, I've come to rely on an approach to writing that is based first and foremost on building a model, a simple one so it can be readily articulated, then hanging my prose on that.  Milton Friedman is the quintessential exemplar of this approach.  You may not agree with his economic propositions, but you can't deny that he was excellent at making an argument.  Paul Krugman follows this approach in his NY Times columns, though he is more combative in his writing style in that he consistently assigns blame to those who disagree with him.

In this blog post I've used Trial and Error both as a straw man and as a way to begin to articulate my model.  The expression "false start," with which the piece is introduced, brings into focus the critical question.  What does the learner do after the first thing that is tried doesn't work?  That question is the main puzzle.  By posing it one can tear down the straw man and replace it with a better hypothesis - there is intelligent guessing about the next model to be tried.  But some of the trial and error approach is retained in my model, the part I've chosen to call verification.

This leads to yet another question.  How do we know whether our solution works or not, and does the trial we've put it through actually test whether it does work?  In this discussion I avoid some minefields that a fuller treatment would have to encounter.  The objective realm, where the scientific method provides the standard for hypothesis testing, often doesn't reveal truth so readily, especially in the social sciences where controlled experiments often can't be performed.  There is the further issue that people who have skin in the game about a model they purport to be true tend to overstate how the available evidence supports that model and likewise tend to ignore evidence that contradicts it.  This minefield is important to explore, but not here.  There is already enough to consider in this piece.  It is why I segued so quickly to the subjective realm.

The above pleases me.  The argument seems reasonably tight and well made.  But now there is the problem with the writing to confront.  People, and here I'm referring mainly to potential readers of this piece, don't like being told what is true.  They'd much rather figure it out for themselves.  Telling them is lecturing at them.  Such lecturing rarely works, even if the argument is extremely well done.  My writing often seems like lecturing.  It fails that way.  I ultimately gave up writing that book because at the time I didn't know how to make my writing less like a lecture yet still make my points.  It is something I still need to learn, a puzzle I've not yet solved.  A different reason for writing this post is to put the question on the table.  If I could solve that puzzle I'd go back to writing the book.  I believe the argument needs to be made to a general audience.  But it needs to be made well enough that a general audience would want to read about it.

Let me close this piece with one other observation.  It is on the difference between experimentation, as I'm using the term here, and what Donald Schön called reflective practice.  The difference depends on who is driving the activity.  A novice can experiment; anyone can.  But the novice is not yet mature enough to engage in reflective practice.  Only an expert has that maturity.  So the thought that underlies this entire discussion is that learners should be considered apprentices on their way to becoming experts.  They should practice what experts do, even if they are not nearly as good as the experts when they are practicing.  It is the practice that will make them better practitioners and move them down the learning curve toward expertise.
