
Getting learners to build things out of concrete (examples, that is).

This post is not about assessment.

A few weeks ago, before every other word on the internet became Wikileaks*, there was a lot of buzz about this piece in The Chronicle by someone who writes students’ essays for them, for money.

I’d like to think, gentle reader, that you, sitting alone at your computer with a cookie arrested halfway through its trajectory to your mouth, are reeling at this astonishing news: students buying their way through a degree? Say it ain’t so! But if you’re reading this blog at all, you probably already know about essay mills, so finish your cookie and let’s move on, because essay mills make me sad, and they probably make you sad, too.

Anyway, I got to reading the (numerous) comments on the Chronicle article and ended up at another essay-mill confessional. And this one absolutely stopped me in my tracks:

I doubt many experts spent most of a decade writing between one and five term papers a day on virtually every subject. I know something they don’t know; I know why students don’t understand thesis statements, argumentative writing, or proper citations.

It’s because students have never read term papers.

It’s so obvious, in hindsight: students never see enough essays to be able to abstract the rules of what makes a good one. I mean, think about the essay as a form — often the form — of undergraduate assessment: we’re basically asking students to build a working aircraft without ever having seen one. We give students some classes about Newtonian mechanics, show them a few force diagrams, then say “right, build me something that will fly me to France for lunch.” The students duly experiment, with most making emergency landings in fields, while we rally ’round, saying helpful things like “but you didn’t put the right kind of fuel in” and “why haven’t you finished building this wing?” Typical student responses include “there are different types of fuel?” and “but you didn’t cover that part of the wing in class.”**

No wonder some students find it easier to have a quiet word with their local aircraft retailer. And I’m not saying this to excuse the essay-mill companies, whom I deplore for the simple, selfish reason that they are devaluing university degrees, diminishing my own efforts and those of my students. I mean, I don’t think you will ever convince me that their aww-shucks-we’re-just-providing-exemplars-what-do-you-mean-students-are-handing-this-work-in-as-their-own schtick is anything other than a thin veneer of bullshit designed to stave off the lawyers. But I also think that if hacking the system is as easy as paying a few dollars here and a few dollars there to someone who will effectively learn for you, then, well, maybe the system isn’t very good. Simon Bostock has some nice thoughts here on why this problem won’t go away until universities wise up.

But as I say, this post isn’t really about assessment; it’s about learning. Quite a lot of our knowledge is rules-based, like knowing “I before E except after C”, and “don’t talk to Roger until he’s had his first coffee of the day”; we rely a lot on these rules of thumb to help us make sense of the world. Students’ whole lives are about learning rules: how to write an essay; how to format a references list; how to make sure the electricity in your flat doesn’t get cut off. Very, very broadly (it’s possible that this dirty shorthand explanation is going to upset some people), there are two ways of acquiring these rules: learning the abstract principles, and learning by experiencing concrete examples for oneself.

Guess which category most university education falls into.

None of this really cohered for me until I watched a colleague from a different department teaching a group of new students the Harvard style of academic referencing. While not the most stimulating topic, this is nevertheless pretty relevant, because it underpins much of students’ written work during their degree.

Here’s one way of teaching Harvard referencing:

* surname followed by initials

* year of publication

* title of article

* title of journal (italics), its volume (italics), page numbers.

These abstract rules work well as a recipe for writing out your own reference list, but they’re not that great if you’re actually trying to internalise the rules. They’re pre-digested; there’s no work left to do there, so the bits of information slide over us, and each other. There’s no friction. Also, there are a lot of pieces of information there: six(ish) basic components, but many more if you also include the order in which they must be assembled, and details like which bits get italicised and which don’t. That’s probably too many.
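
Just to labour the point about how mechanical that recipe is, here is a minimal sketch in Python (entirely hypothetical; nobody needs this program, and the field names are my own invention) that assembles a journal reference by following the abstract rules verbatim. It requires no understanding at all, and it happens to reproduce the first entry of the exemplar list below; in print, the journal title and volume would be italicised, which plain text can't show.

# A sketch of the "abstract rules" approach: the recipe for a Harvard-style
# journal reference written out as a mechanical procedure.

def harvard_journal_reference(surname, initials, year, article_title,
                              journal_title, volume, pages):
    """Assemble a Harvard-style journal reference from its components."""
    author = f"{surname}, {initials}"   # surname followed by initials
    return (f"{author} ({year}). {article_title}. "
            f"{journal_title}, {volume}, {pages}.")

print(harvard_journal_reference(
    "Aardvark", "J.R.", 1980, "Ants, and how to eat them",
    "Journal of Orycteropodidae Studies", 80, "11-17"))
# Aardvark, J.R. (1980). Ants, and how to eat them.
#   Journal of Orycteropodidae Studies, 80, 11-17.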

Here’s a different way of teaching referencing:

Aardvark, J.R. (1980). Ants, and how to eat them. Journal of Orycteropodidae Studies, 80, 11-17.

Barker, R. (1982). Rum babas, and what to do if you’ve got them. Reading: Goodnight From Him.

Halley, W. (1955). Rock Around The Clock. New York: Decca.

Izzard, E. (1998). Cake or Death? Gateaunomics, 10, 195-196.

Lemur, R.-T. (2010). Strepsirrhinoplasty. Antananarivo: Raft Press.

Leonard, E. (1996). Out of Sight. New York: Harper.

Shorty, G. (in press). Okay, so they got me. Los Angeles: Cadillac.

* What are the rules by which this reference list is organised? Name as many as you can.

Here, to understand the rules, we have to do a little work. But it’s sort of fun; working out the rules is a barely-game. And the thing about abstracting the rules for yourself in this way is that the process is messy, tracking its muddy footprints all over your memory. Which is exactly what you want.

Here’s a half-baked thought: you can’t teach abstract principles nearly as well as people can teach themselves using concrete examples.

Science as a university subject relies on practicals as well as theory, but we still spend a lot of time telling students what the rules are, rather than letting them abstract those rules for themselves. For starters, I think this is a very paternalistic*** way of treating people who are supposed to be adults. But also, I’m pretty sure it constitutes poor practice, since putting in a little mental effort is rewarded in the long term by better retention and understanding. You thought your teachers were sadists, giving you worked example after worked example? Well, maybe they were — but my point is, they were actually helping you out, too.

But university is not high school. And the thing about being an ‘expert’ (and if you’re lecturing to university students, then you are, by many people’s definition, an expert — even if no-one fully understands what it is exactly that you are an expert in, other than that you “do something with computers”) … the thing about being an expert of any kind is that it’s so, so tempting to provide helpful short-cuts, like well-meaning parents who hand down sensible advice about life to their children. We’ve all been given that advice, and I’m pretty sure that we all learned more profoundly from the consequences of ignoring it than we ever would have if we’d listened in the first place. The trick that education often misses is that abstract rules are easy to ignore until we understand their relevance, by which time we’re usually pretty deep in our own personal concrete example. Or deep in something, anyway.

I recently spent some time with a friend who is trying to learn about organic chemistry but finding some aspects of it hard. I enjoyed chemistry at school (um, 20 years ago), so we sat down together for an hour to try and work through the IUPAC scheme for naming chemicals. Now, you can try to learn all the rules for naming molecules in organic chemistry, but there are lots – they go on for two or three pages of my friend’s textbook. That’s a lot of abstraction, and we know that concreteness helps us learn (stodgy academic explanation; human-readable explanation). So instead, we looked at some specific examples of structural formulae, along with their names, and tried to abstract the rules of naming based on the information we had. And you know what, it worked pretty well. In fact, naming in organic chemistry is basically a language and visualisation problem, not a chemistry one, so I learned the rules quicker than my friend did, because language and visualisation are more my bag than they are his. But I’ve yet to meet someone for whom the exemplar approach flat-out doesn’t work.
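
To give a flavour of what I mean by a language problem, here is a minimal sketch, in Python, of one tiny corner of the scheme: the unbranched alkanes, where the name is just a counting prefix plus the -ane suffix, and the molecular formula is CnH(2n+2). The real rules go on to handle branching, functional groups and the rest; this is an illustration, not a naming tool, but it is exactly the kind of pattern you could abstract for yourself from methane, ethane, propane and a couple more examples.

# One tiny corner of IUPAC naming: straight-chain (unbranched) alkanes.
# The name is a multiplying prefix for the number of carbon atoms plus "-ane";
# the chemistry here is standard, but the code itself is purely illustrative.

PREFIXES = {1: "meth", 2: "eth", 3: "prop", 4: "but", 5: "pent",
            6: "hex", 7: "hept", 8: "oct", 9: "non", 10: "dec"}

def alkane_name(n_carbons: int) -> str:
    """Name the unbranched alkane with the given number of carbons."""
    return PREFIXES[n_carbons] + "ane"

def alkane_formula(n_carbons: int) -> str:
    """Molecular formula CnH(2n+2), written conventionally."""
    carbons = "C" if n_carbons == 1 else f"C{n_carbons}"
    return f"{carbons}H{2 * n_carbons + 2}"

for n in range(1, 9):
    print(alkane_formula(n), alkane_name(n))
# CH4 methane, C2H6 ethane, C3H8 propane, C4H10 butane, ...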

Of course, when I talk about abstracting the principles from a set of concrete examples, what I’m really talking about is pattern recognition. Pattern recognition will be one of the essential 21st-century skills. It’s not about finding information anymore — now it’s about finding the right information, rejecting the irrelevant stuff, and knowing how we might go about telling the difference. Hat-tip for that link goes to Lee Skallerup, who suggests:

Get students to analyze the writing (and the comments) to see what kinds of patterns emerge, what they can see if they take the time to look.

If we want to prepare students for the 21st-century workplace, we should be teaching pattern recognition, using exemplars and letting students figure out the rules for themselves; those are the skills they are going to need when they go out into the world. It shouldn’t take much effort to shoehorn this sort of activity into the classroom, or to get students to understand the basic process — we abstract rules from concrete examples all the time (take this discussion of what differentiates men’s and women’s shoulder bags, for instance). As @Dave_Ferguson points out in that post,

The effort to make the tacit knowledge more explicit encourages reflection and revision … Concrete examples help people work their way toward more general principles.

And here, try this on: “assessment should fundamentally be about building learners’ capacity to make informed judgements about their work” (@cathfenn, via @hypergogue). I couldn’t agree more with this: success as a teacher or learning facilitator is watching the learner walk away, not needing you anymore (and ideally, exceeding you). But to be able to get to the point of critiquing their own work, the learner must be able to move from tacit or implicit understanding of the rule to being able to describe it explicitly. And I’m struggling to think of a case in which having concrete examples would not make it easier for learners to explicate the rules in this way.

Once I started thinking about this abstract/concreteness issue, I started seeing it everywhere. For example, on my third-year module, I set students a piece of coursework that is opt-in, and pass/fail: in exchange for demonstrating that they’ve done some of the groundwork, successful students receive some additional information that will help them think about the topic. Quite often, students fail to grasp what is required of them (a failure of pattern recognition that may well originate in my explanation of the task), so I ask them to resubmit. Recently, a student emailed me work that wasn’t up to scratch, so I suggested she try again and resubmit, which she duly did. But when I got her work back a second time, it was still missing the point. So I thought about it for a bit, and then I sent her the additional material anyway. And you know what? I got a very nice, very grateful response, saying that she now realised exactly why her original submissions hadn’t been right. Three simple points of triangulation (two “wrong” answers and one “right” one) constituted enough information to start abstracting some rules.

Really, the more I think about it, the more I think that using concrete examples and letting students abstract the rules for themselves is really just another variation on show, don’t tell. Which is honestly the best advice for learning design — or communication of any kind — that I know. And hey, maybe if we want to assess learning in ways that are less easily hackable, we should engineer a system of assessment that requires students to show us, as well as telling us, what they’ve learned. Let’s have assessments that test (a) implicit knowledge of the rules, (b) explicit knowledge of the rules, (c) awareness of situations in which the rules may not apply, and (d) the learner’s awareness of their own progression in terms of grasping (a) through (c), because there’s nothing like getting students all metacognitively tooled-up.

Okay, so maybe this post was a little bit about assessment.

(PS — If you’d like to read more about abstraction and concreteness in learning design, you may find this short paper interesting.)

* Here I’m inserting a reference to current events, so you know I didn’t just record this sometime last year.

** Just so we’re clear, this is a metaphor; I’m fairly sure this is not how Aeronautical Engineering students are assessed. At least, I hope not.

*** I know it’s not the done thing to laugh at your own jokes, but recently I had occasion to tweet “But paternalism is good for you. Here, swallow this.” You have to seize these opportunities where you find them.
