Ask, Don’t Tell: the power of questions

Scientists are not trained to ask questions.

No, excuse me — scientists are absolutely trained to ask questions. In the lab. In the lab we are rabid information ferrets, and we will run up every trouserleg that the great wardrobe in the sky sees fit to provide.

Scientists who become lecturers are not trained to ask questions — at least, not questions of the classroom variety (remember, we’re preoccupied with being subject matter experts). We are trained to talk. And talk. Seriously, if you like to pontificate, you could do worse than become a scientist. It’s like our national sport or something.

And so, at the end of class, because we know we’re supposed to ask this, we ask “any questions?” and nobody says anything — instead, the entire class launches into a frenzied scramble for their bags and coats. Because “any questions?” is about the worst thing you could possibly ask, and my students know it, even if they don’t explicitly realise it.

And yet, in defiance of the mute, are-we-done-yet hordes, a small trickle of students invariably arrives afterwards to ask questions, or to share something interesting and relevant from their lives. And sometimes, it feels like more teaching and more learning happens in those little conversations than in the whole of the lecture preceding them.

I experienced for myself, and am trying hard not to propagate, the cycle of abuse that is didactic, teacher-led education. “Sit down and shut up” is a powerful message to impose on children — and it’s clearly a sticky one, because by the time my students arrive at university, that’s their expectation of what should happen in class¹. Ironically, when students don’t want to interact in class, it’s actually even harder not to ask things like “any questions?”, because we do it out of habit, and stressful situations are great for dredging up our most-ingrained routines.

“If you want to improve any element of your life, learn how to ask better questions.” (via Paul at Brain Friendly Trainer).

I’m a huge fan of asking questions: they’re the fast track to learning (a) how interesting the other person is [seriously: people are fascinating] and (b) all the stuff they know that you don’t. And pretty much everyone likes talking about themselves and their thoughts, so asking questions is good social grease, too.

Asking great questions is also a brilliant habit to build in the classroom. It’s a skill I’ve been quite slow to develop, but I’m getting into it. So here are a few ways that I’ve tried to bring more questions into my classes:

I already posted about how I turned a two-hour lecture into a two-hour problem-based learning session. This was great for two reasons: first, I asked the students a ton of questions, which normally isn’t something we make much time for in lectures. Second, and even more exciting, was that the students then started asking their own questions. In front of 60 other students. Seriously, if I do nothing else of value this academic year, I’d almost be okay just with that. (Well, not really. But you know.)

I added media clips to my lectures as an excuse to ask concept checking questions. I showed my students Jill Bolte Taylor’s TED talk about the subjective experience of having a stroke. Watch it if you haven’t already — not only does she bring great insights from her knowledge of the brain (she’s a neuroscientist), but she also gives the talk with great humour and humanity. And instead of giving students multiple-choice questions afterwards (Did the stroke attack (A) the left (B) the right or (C) both hemispheres of Jill Bolte Taylor’s brain?), I asked much harder questions: What are the physical and emotional consequences of a left-hemispheric stroke? How do you see objects that appear in your left visual field? Outline the path taken by information through your brain. And so on. (I let them talk through it with a partner first, before I threw it open to the class. Small steps.)

For reasons outlined here, I changed the format of student presentations to Pecha Kucha (which I must write about soon, because it completely deserves its own post). And we went from “mostly the student talking” to “the student talks for a while and then everyone pitches in with questions and discussion” — which, for the record, is a way better experience. For everyone. (I collected questionnaire data that says so, too.) Nothing makes a class interactive faster than getting students interested enough in the subject to ask each other questions.

I stopped telling and started asking. This wasn’t a class-specific intervention, just something I’ve consciously started trying to do over the last couple of years every time students get stuck: I answer their questions with questions of my own. It seems especially useful when working with very reticent students, but it’s also a handy tool when guiding students who are struggling to express their thoughts on paper: how do you know that? What evidence do you have? Why is that relevant?

What have I learned? Asking questions works. I’ve had really positive feedback from students about these sessions, and I know in my heart that I’m asking better questions and getting students to think more actively about the problems I’m setting. I’ve also learned that if your concentration lapses, even for a moment, it’s really hard not to reflexively ask “any questions?”, so deeply ingrained is the concept. (I guess the only solution to that is more practice.)

Yes, these activities are all things I should have been doing to begin with — but remember that didactic, scientific background, and show me a little mercy; breaking the cycle of abuse can take a while.

And now I want to add a whole session on “asking questions” to the teaching certificate.

.

Edit, one day after posting: One other thing that I learned, just today, is that sometimes it’s okay to ask if there are any [further] questions, if everyone is good and warmed up, and you have time to spare. Because they were, and we did – and students came up with some great questions. Stuff I had no idea about, but about which it was fun to speculate. But I think people really have to be in that headspace and comfortable with the idea of asking questions in class before this will work.

¹ Okay, some of students’ reticence in class is also driven by not wanting to look like an idiot in front of their peers, in case the question is “a stupid one” … one day I might turn up at class wearing a t-shirt that says THERE ARE NO STUPID QUESTIONS — ONLY STUPID LECTURERS, but you just know that’s going to backfire in ways that are both immensely embarrassing and completely predictable.


The search for context in education and journalism (wicked problems, Wikipedia, and the rise of the info-ferret)

It’s a January evening, a schoolnight, and I’m sitting on my sofa thinking Stuff it. I’m tired and it’s dark and I worked hard today, damn it. It’s pretty hard, at that moment, to engage with the things I know are really good for me, like going to the gym, eating right, and engaging with decent journalism that actually says something worthwhile about the state of the world.

Ah, journalism. Why is it so hard to engage with good, wholesome news? You know, instead of the junk-food variety?

Well, for starters, it takes effort, something in short supply when you consider that UK academics apparently rack up an average 55-hour working week. So if I sometimes choose entertainment over learning, maybe it’s because I’ve been thinking really hard for 11 hours already.

Here’s the more interesting question, though: why should it take so much effort to engage with the news? I think the record will show that I did okay in school and that I know a few long words. I can follow an argument; on a good day, I can win one. But watching or reading the news and really, really getting it (not just at the who-shot-whom level, but understanding why), frequently eludes me.

For the longest time, whenever I read the news, I’ve often felt the depressing sensation of lacking the background I need to understand the stories that seem truly important.

I didn’t write that, but I could have. By the time I’d got old enough to be properly interested in the ongoing story that is Northern Ireland, no newspaper was interested in explaining the context to me. I knew it had to do with territory, nationality and religious differences, but who were ‘republicans’? What did they want? The newspapers all assumed that I knew a whole bunch of stuff that actually, I didn’t know. The dictionary was no real help, the Internet was still in short trousers, and Wikipedia didn’t yet exist. (Not that we had the Internet at home. We didn’t even have a computer.) And I was at that delicate age where I didn’t want to look stupid by asking what might have been a dumb question. (Actually, it wasn’t a dumb question at all, but I didn’t know that then.)

We would shy away from stories that seemed to require a years-long familiarity with the news and incline instead toward ephemeral stories that didn’t take much background to understand—crime news, sports updates, celebrity gossip. This approach gave us plenty to talk about with friends, but I sensed it left us deprived of a broader understanding of a range of important issues that affect us without our knowing.

Secret-that’s-not-really-a-secret: the guy who wrote this is a journalist. His name is Matt Newman, and he’s reporting here for Harvard’s Nieman Foundation about how modern journalism bypasses context in favour of immediate, juicy details.

News is complicated. To make sense of complicated things, we need context. And the newspapers aren’t delivering that context; even journalists say so.

In fairness, context is hard to come by when — as with Northern Ireland — your story pretty much defines the phrase wicked problem (see also its big brother, The Middle East). How much information is ‘enough’? How much background would you need to really understand the issues surrounding Obama’s healthcare reforms? Or the debate on university fees?

We need something, and traditional news media aren’t providing it.

But we have Google and Wikipedia, right? So there’s really no excuse for not being able to find out at least something about nearly everything. Apparently, when a big news story breaks, people really do converge on Wikipedia, looking for context; we are a generation empowered, as no generation before us, to find stuff out.

Except.

Except that I still get emails from my students that read What does [word from the course materials] mean? I used to write lots of replies of the biting-my-tongue variety, politely suggesting that the student take advantage of the resources at their disposal¹, but eventually I got fed up with this, and wrote an FAQ in which I was somewhat more blunt, though I hope in a kind way.

My favourite was a student who emailed me after a deadline, apologising for the poor quality of the coursework he had submitted, and explaining that he hadn’t known what one of the words in the essay question meant — so he had just tried his best and hoped. This wasn’t a word that was archaic or obscure. This was a word widely employed in psychology and related subjects. It’s not in the paper dictionary on my desk (which, admittedly, is 20 years old), but it’s very, very easy to find and learn about online.

It’s not about having access to the information; all my students have Internet access at least some of the time. Too many (N > 0) of my students are just not in the habit of looking for information when they get stuck, like someone forgot to tell them that the Internet is good for more than just email and Facebook.

But students will surf Wikipedia and YouTube all day long, given half a chance, so what’s that about?

At Playful ’09, Tassos Stevens talked about the power of indeterminacy, and whether, if someone throws a ball, you can look away before you find out if the other guy catches it. Suspense is immensely engaging.

Wikipedia is like this: it’s a barely game, where the idea is to answer as many “Ooh, what does that mean?” questions as possible, using only the links from one article to the next. In suspensefulness terms, Wikipedia is an infinite succession of ball-throws, a sort of Hitchcock: The Basketball Years. (Okay, so Tassos was talking about cricket, but my point stands.)

But education obviously doesn’t feel like a barely game, because students don’t behave there like they do when they’re surfing Wikipedia. So I guess we need more suspense. This might mean being less didactic, and asking more questions. Preferably messy ones, with no right answers.

I think that if we really want to turn our students into information ferrets, running up the trouserlegs of their canonical texts to see what goodness might be lurking there in the dark [This metaphor is making me uncomfortable — Ed.], then maybe we, like the news media, need to get better at providing context.

If students email me with simple queries rather than trying to figure things out on their own, maybe it’s because the education system hasn’t been feeding their inner info-ferrets. (Note to schools: teaching kids how to google is a completely different issue from teaching them to google and making it into a habit, and some days, it feels like you only deal in the former.)

We exist, currently, on the cusp: everything’s supposed to be interactive, but not everyone’s got their heads around this yet. (“Wait — you mean we’re supposed to participate? Actively??”) The old-school, didactic models of education and journalism (“sit down, shut up and listen; we know best”) are crumbling. And some of the solutions about how to fix journalism look a lot like the arguments being rehearsed in education about how to make it valuable and keep it relevant: develop rich content that your customers can help build and be part of; accept that you might need a model which permits the existence of premium and budget customers. (This is going to be highly contentious in higher education, and I still don’t know what I think about it. But I don’t think the idea is going away anytime soon.)

I ran one of the many iterations of this post past Simon Bostock and he wrote back: Newspapers have learned the wrong lesson of attentionomics. I think they’ve got it bang-on as far as briefly grabbing our attention goes,² but I don’t think it’s doing much for our understanding of the news, and some days, I worry that education is headed the same way.

Jason Fry asks, if we were starting today, would we do this? This is a great question for journalism, but it’s also pretty pertinent to education: we still teach students in ways that make only marginal concessions to the Internet’s existence, treating it as little more than a dictionary, encyclopedia, or storage container.

Given that nearly anything can be found with a few keystrokes, if we had to redesign education from scratch, what would it look like?

More like Wikipedia. More ferret-friendly. And maybe upside-down.

.

[Acknowledgements: major kudos to Simon for linking to Ed Yong's great piece on breaking the inverted pyramid in news reporting, for reading drafts of this post while I was losing my mind, and for the juicy, lowbrow goodness of LMGTFY, below.]

¹ I suppose I could slam my students with Let Me Google That For You, but I prefer to save the passive-aggressive stuff for my nearest and dearest.

² If this post were a headline, it would read STUDENTS TOO LAZY TO GOOGLE. (Admittedly this would be closely followed by SUB-EDITOR TOO DRUNK TO CRAFT ORIGINAL HEADLINE and BLOGGER CHEERFULLY IGNORES CLICHÉ.)


Why experts are morons: a recipe for academic success

This morning there was quite a bit of tweeting, back and forth, about this article and exactly how stupid it is.

“If our attention span constricts to the point where we can only take information in 140-character sentences, then that doesn’t bode too well for our future,” said Dr. Elias Aboujaoude, director of Stanford University’s Impulse Control Disorders Clinic at Stanford University.

Yup, you read that right. Some guy with a Ph.D. who works at one of the best universities in the world (and who’s sufficiently good at his job that they made him director of a clinic) is talking — to all appearances quite seriously — about the idea that the human attention span might shrink to the length of a tweet.

In other news, if the world were made of custard, global warming might lead to major dessertification, if we could just bake an apple crumble big enough.

Maybe there’s a good explanation. Maybe Dr Aboujaoude’s remarks were taken out of context by the San Francisco Chronicle. Or maybe they threw him this ad absurdum scenario and he ran with it because he’s a nice guy and thinks that even if reporters pose a dumb question, it would still be rude to call them on it.

Here’s my ill-conceived, half-baked thesis for the day: experts are morons.

Why? Well, we get very excited over stuff we think is new, because we’ve been too busy down in our own little silos. I pissed Harvey off earlier by posting, in good faith, a link to Tyler Cowen’s TED talk about the dangerous appeal of stories.

Kids, don’t even try to sell Harvey old rope. Even if you didn’t know it was old rope. He’ll know.

What I ended up saying to Harvey was essentially Look, there’s a movement afoot to try to get storytelling back into learning, to replace the content firehosing that passes for big education these days, McDonald’s-style — and this talk serves as a useful reminder that stories are invariably a gross oversimplification of the evidence.

What I should have been saying was: Dude, I spent umpteen years becoming a subject matter expert, and at no point did anyone tell me that I needed to apply my critical faculties to delivering the material I researched so painstakingly. I’m new at this; cut me some slack!

(It turns out that Harvey and I were somewhat at cross-purposes; such are the limitations of 140-character ‘discussion’.)

Here’s the thing: academic success favours those who focus their critical faculties on developing their subject area expertise.

Below is a recipe for modest success in academic life and for becoming a legitimate ‘expert’. (Quantities and ingredients may vary according to your needs and experience.)

  • You need to be bright-ish. Not supernova bright, just bright enough. (If you’re too bright in school, you’ll get bored; see next point.)
  • You need to be well-behaved. (If you don’t behave, you’ll be labelled disruptive and that will do exactly what you think it will to your chances of academic success. Yes, even if you are bored because lessons are too easy.)
  • It helps to crave everyone’s approval. (If you don’t care what your teachers or parents think, why would you try hard on subjects that don’t really interest you?)
  • Questioning authority probably isn’t in your nature. (Or if it is, it’s a very specific kind of critical thinking, like “hey, maybe nukes aren’t that great an idea, mmkay?”) This will serve you well later, in your tenured career.
  • You are comfortable letting other people set goals for you. (“You think I should go to university? Great!”)
  • You acquire a certain nerd-like pleasure (flow, if you like) from gnawing on very specific questions.
  • Your school years have conditioned you to understand that most people are mean, and best avoided.
  • Metaphorically or actually, you have let a thousand cups of tea go cold while you geek out on your chosen subject.
  • … okay, that much will get you through university and into a postgraduate programme (Master’s or Ph.D.). At this point, it will be particularly helpful if you can screen out information about the world around you, because this will just distract and confuse you about the relevance of what you are doing. (Having a crisis of meaning is one of the fundamental stages of doing a Ph.D.)

If you survive this process and get your doctorate, you enter the world of teaching, admin, research, publication, and grant-getting — listed in increasing order of importance to your new employer. Your Ph.D., the entry requirement to academia that you have worked so hard on, also serves as your passport to teaching. Pause a moment to reflect on the weirdness of that statement: subject expertise is used as a measure of how competent you are to communicate that information meaningfully to non-experts.

(Some universities, mine included, are trying to address this systemic shortcoming by getting new lecturers to do a teaching certificate. This is a lot better than nothing, but it’s also quite possible to do the absolute minimum required to pass, then go on your merry way, unmoved and largely unchanged. At least we do ‘peer observation’, which is a nice way of seeing what other people are up to; it’s hard not to reflect on your own teaching when watching someone else deliver a session.)

Once you’re on the big shiny merry-go-round of teaching-admin-research-publication-grant-getting, it’s even harder to drag your ass out of the mire of just trying to keep up with your subject area and across the road into the big field of flowers that is good educational practice. And when you do manage to haul yourself over there (at the cost, by the way, of time spent on research/publication/grant application — and no-one is going to reward you for that choice), you get disproportionately excited when people show you some of the shiny things that exist in the world, because you’ve been far, far too busy becoming a subject expert to notice them. This can make educators look like big, dumb puppies — for example when we’re over-keen to co-opt neuroscience.

The other side-effect of being an ‘expert’ is that if you’re not naturally inclined to cause trouble, question the system, or think critically about more than subject-matter problems (and remember, you have floated to the top of an educational system that rewards exactly those qualities), then sometimes you end up saying really dumb stuff, because you’re too busy thinking “ooh, that would be interesting” — like what if we really could only take in 140 characters’ worth of stuff before our attention drifted — to fully consider the validity of the question.

None of this is an excuse for living up to the ‘woolly professor’ stereotype, but I hope it helps to explain to people like Harvey why experts sometimes sound like they’re rediscovering — or even reinventing — the wheel. And as for us ‘experts’ (and boy, am I uncomfortable with that label), we need to try harder to think about the practical applications of what we do — and to remember, once in a while, to apply those finely-honed critical thinking skills to something other than our own subject areas. We’re not really morons, but to the casual observer, it’s an easy mistake to make.

.

Obligatory afterword: there are a number of stellar educators who really do manage to apply their critical faculties to more than just their own subject area, and who manage to get through university and postgraduate qualifications despite asking really awkward questions and rocking the boat. If they ever isolate a gene for that, we should all go get spliced right away.


When giving presentations, the only rule that matters is the rule of attention

Recently I was discussing presentations with a friend who is a student. Although being asked to make a presentation is a fairly common part of the student experience, and he has a reasonable idea of what’s involved, nobody has ever taught him or his peers how to do it.

Because I spend more time thinking about presentations than is strictly healthy, I offered to write my friend an email, summarising my thoughts. But once I got started, it very quickly turned into a monster email, the kind that people tend to skim once and then write a quick one-line reply along the lines of “Thanks, that looks really interesting — I’ll come back to it when I have more time,” maybe because they’re intimidated by the sheer volume and content of it all. (Yeah, okay, this is really about me and how I procrastinate over reading emails that look like they will be hard work. You’re listening to WKLJ — the sound of a guilty conscience.) Plus, numerous URLs turn email into hyperlink soup.

So instead of sending my friend an email, I wrote this blog post. It’s ostensibly about the mistakes students make when they give presentations, but really it’s about how the only rules you need to know about giving a good presentation are the ones about human attention.

Here are some common mistakes I see in student presentations:

* Not having practiced the presentation enough.
* Not knowing enough details of the story, including germane technical details/terminology/pronunciation.
* Not picking a topic that they actually find interesting.
* Confusing slide preparation with presentation preparation.
* Putting too much information on each slide.
* Not thinking about what it will be like to be the audience for this presentation, rather than the presenter.

Notice how ‘being nervous’ is not on that list. We understand that students will be nervous about giving a presentation — being nervous about doing something fairly new in front of other people is completely understandable, and aside from one or two freakish individuals who take to presenting as though they’ve been doing it all their lives, everyone’s in the same boat. So relax :)

None of those mistakes are really about what happens during the presentation: they are all about how students prepare for the presentation beforehand. My impression from several years of watching students give presentations is that they are quite relaxed about the preparation, then get scared when it comes to the presentation itself. But by the time you are ready to give your presentation, it’s too late to be nervous — because by then, you’ve either put in the work, or you haven’t. Preparation is worth being nervous about; standing up and talking isn’t.

Ignore all the ‘rules’ about how to structure your slides. For every rule, there will be at least one instance in which it is not valid. Knowing which rules to follow and which to break is mostly a matter of practice and experience — which you may not have. So ignore, or at least treat with extreme suspicion, anything that sounds like a rule. Common rules include:

* Use X lines of text/bullet-points per slide
* Plan one slide for every N seconds of your talk
* The 10/20/30 rule

These all sound perfectly sensible, but the trouble with rules is that people cling to them for reassurance, and what was originally intended as a guideline quickly becomes a noose. My opposition to putting reams of text on slides is well documented, but I bet there are presentations out there where that’s exactly what’s required — at least, on one or two slides. Likewise, having more than ten slides might be exactly what you need; hell, you might need a hundred. Rules stipulating the number of slides you should have, or how fast you should transition between them, conveniently ignore that these aspects of your presentation depend on (a) what you are talking about, (b) what’s on your slide, and (c) how long that takes your audience to apprehend. Rules about slides are rubbish, because they stop you from thinking critically about what — if anything — you need to show in support of the point you want to make.

Ready-to-fill slide layouts are just another kind of rule. When you open PowerPoint or Keynote, they instantly start making suggestions about the layout of your slides. Bullet-lists feature prominently. When was the last time you enjoyed a presentation that had page after page of bullet points? Once you’ve figured out the story you’re telling, think about how each point could best be conveyed visually, and about whether you even need slides or visual aids at all.

Concentrate on the rules of attention. The thing you most want during a presentation is people’s attention, so everything you do and say has to be about capturing that, and then keeping it. The rules of attention are more or less universal, easier to demonstrate empirically than rules about specific slide formats, and can be neatly summarised as follows: people get bored easily.

Some specific rules of attention are:

People can really only retain about four bits of new, unrelated information — and sometimes not even that many. Don’t overstuff your presentation, and take care to signpost the key points — visually, verbally, however you want.

It’s hard to process spoken and written words at the same time. Integrating your spoken words with pictorial slides makes it easier for the brain to process these two streams of information efficiently. This also helps your audience remember more of what you said.

A story will keep people’s attention, because they will want to know what happens next. At Playful ’09 last week, Tassos Stevens talked about the compelling nature of indeterminacy, and asked the question Once a ball has been thrown, is it possible to look away before you know whether someone catches it? If you don’t know what your story is, or don’t convey that story clearly to your audience, they won’t stay focused; as Hitchcock knew very well, it’s all about suspense.

People really like looking at screens. If you’ve ever been in a pub with the TV on and the sound off, you’ll know that screens are an attention-magnet. This is great when you’re giving your presentation and there’s something on the slide that you want people to look at, but not so great if they are still looking at the slide while you are talking about something else. There’s an easy fix — press B or W while in Slideshow mode: the screen will go black or white, respectively (this works in both Keynote and PowerPoint), and people’s attention will focus on you, because now you are the moving, shiny thing in the room. Press the same key again when you’re ready to direct the audience’s attention back to the screen.

Sustaining audience attention requires frequent changes. Simon Bostock once tweeted something great about how flow is when you stop noticing the joins between one parcel of attention and the next; this is the state you want to induce in your audience. Paradoxically, in order to get them to concentrate on something for a long time, you need to keep changing the thing they’re paying attention to, or they will get bored. Change stuff mindfully: I don’t mean adding clip-art or unrelated animations to your slides, I mean introduce something seriously astonishing. (Unexpectedness is a brilliant tool for wrangling people’s attention.) Less dramatically, you could use changes in your tone of voice, speaking volume, or where you are standing to draw the audience’s attention to a particular point. Evaluate your slides and consider whether they could be less formulaic; consider introducing some audience participation to get everyone out of the you-talk-while-they-listen rut.

Your audience will tell you when their attention is wandering. Hopefully not out loud, and hopefully not by harshtagging your presentation. But you will know from looking at their faces where their attention is, and if it isn’t on you or your visual aids, you will know that you need to change something. Don’t be afraid to go a bit off-road in the service of keeping people interested; it’s a kindness and a courtesy to stay with your audience, and a presenter on auto-pilot is not a pretty sight.

.

Edit: There are some great additional points in the comments below.

Edit 2: Olivia Mitchell has written a great post about seven ways to keep your audience’s attention. We’re all about attention hacks here at finiteattentionspan!

[Marginalia: (1) Aesthetic is not a rule. Having a consistent look-and-feel (good colour palettes, consistent use of fonts and text size) can really elevate a presentation. (2) Constraints are not the same as rules. Obviously, most presentations will have a time-limit, and you need to respect that. And if you are doing Ignite or Pecha Kucha, there are some very specific constraints about slide timing (and, necessarily, about what goes on the slide, since viewing time is so short). But constraints are great news for creativity.]


    Everything is upside-down: turning lectures into homework with problem-based learning

    The other day, I stumbled (via Tony Baldasaro) on this gem:

    How much more could happen in our classrooms if we created more opportunities for students to learn basic skills and content outside of class? … Class, rather than being a time when all kids sat and received instruction, could be the time when they reinforce skills by doing problem sets, worked on real-world application projects, collaborated with teachers to reinforce concepts, etc…

    The post is called Inversions; go read it, it’s only short.

    This is such a wonderful, simple idea. And of course, many good instructors and educators are already doing just that — as Chris Lehmann points out, this is essentially what happens in English class when kids read a book as homework, then discuss it during class time. Students use out-of-class time to acquire content, freeing up class time for process. Because processing, doing, is how we learn, and students can get instant feedback from the instructor. Dialogue happens; moreover, students have the opportunity to learn vicariously from other students’ participation.

    But this isn’t happening enough in universities, for reasons I have written about before. Big classes and a student body working to pay university fees — or to be able to afford food — mean that often, lectures become an info-dump, because you can’t guarantee that the majority of students have done the reading — and in my view, good teaching takes up from where the student is, not where they should be.

    And I do get tired of the sound of my own voice in a two-hour lecture. Oh, I can teach for two hours; this post isn’t coming from a place of laziness. On some level I am probably even a bit of a show-off, or I probably wouldn’t enjoy teaching as much as I do. But, you know, no matter how enthusiastic I am, just talking for two hours is going to lose even the keenest student for periods of time, as their attention ebbs and flows. Estimates of attention span vary wildly, and a big chunk of that is about whether you are in flow.

    Passive listening probably does not encourage flow in our students.

    Attention span also varies as a function of ability, which is one reason why it’s so important to teach in a way that reaches everyone. And it’s unreasonable, I think, to expect anyone’s attention to last for a two-hour lecture, which is why so many of my colleagues are currently trying to think of ways to break up the time a bit. (The university schedules two-hour lectures in the way that many people schedule one-hour meetings: it seems to be a convenient and universally-understood unit of time, but may not be exactly what is needed.)

    So how about we approach this problem from the other direction: make the classroom about practice, and perhaps we can nurture people’s curiosity in the topic and encourage them to pursue the more detailed background content afterwards?

    Obviously this strategy is not without risk. Techniques like problem-based learning (PBL) have been found to improve students’ engagement and critical thinking skills, and students who have used PBL seem to hold their own against students educated more traditionally. But I have heard many concerns expressed that PBL can lead to patchy subject knowledge, though I am having trouble digging up much in the way of evidence for that (if you can help me out here, please leave a comment!). Wikipedia has a nice section on the cognitive load issues around problem-based learning; the key thing seems to be to start gently and gradually withdraw support, with the instructor increasingly becoming more of a facilitator.

    I wouldn’t necessarily have tried this with first-year students, who perhaps haven’t acquired enough basic subject knowledge. But final-year students have been up to their elbows in the subject for long enough that I figured I could probably meet them halfway.

    So, I rewrote my lecture.

    In fact, my slides didn’t actually need a great deal of reworking, though I took some more of the text off them. I made lots of duplicate slides: the first with an image, and a question or two; the second, with simple labels. It was a pretty basic format: here’s some stuff — now figure out what you’re looking at.

    [Image: PBL hippocampus question slide]

    And then, when they’d had a few minutes, in small groups, to try and work out what was going on, I’d ask for suggestions, and we’d talk a bit about those, and then I’d show them the second slide:

    [Image: PBL hippocampus answer slide]

    … and we’d talk about that for a short while. I started off with some basics, and then we got into more and more complex stuff. Occasionally I would remind them, “start with what you already know.” Students had a worksheet that duplicated the images and questions, so they didn’t waste time and attention copying things down, and could concentrate on the what and why.

    We did this for two hours (with a break), in a warm lecture theatre, in the afternoon, and nobody fell asleep. Students asked questions, made guesses. It was genuinely interactive.

    In many ways, I was lucky. This lecture was all about the visuals: pictures of brains with stuff wrong with them. Had I been discussing highly abstract and theoretical concepts, it may not have worked well. Further, the lecture theatre was pretty much exactly right for the size of class: small, with about 60 seats and an aisle up the middle. I could, and did, reach all the groups; had we been in the 450-seat lecture theatre with people sprinkled everywhere, much of that class dynamic and atmosphere would have been lost.

    Of course, not everything went brilliantly. There was a little too much content, and what I should have done was set the remainder as homework, rather than trying to cram it all in. I lost one group at the break, though this isn’t uncommon and you never really know why they’ve left; often it may be nothing to do with you and everything to do with their personal circumstances, and I never like to ask, in case it really is the latter and they are mortified that you’ve brought it up, or noticed their absence.

    I won’t really be able to gauge the success of the session until the exam results, and student module evaluations, are in. But overall, it felt right. It felt like a good way to teach, and I really, really hope it inspired students to tackle the background reading. The explicit feedback I have had from students so far has been pretty positive, and a colleague who sat in on the session to observe seemed to really enjoy it, and said some very nice things. All of which gives me a little more faith in my own experience and enjoyment of the session.

    Next stop: trying this again, with a bigger class. Anyone want to play along?


    Show me the evidence! Why education needs more science interpreters.

    In his otherwise laudable Really Bad Powerpoint, Seth Godin writes:

    Our brains have two sides. The right side is emotional and moody. The left side is focused on dexterity, facts and hard data. When you show up to give a presentation, people want to use both parts of their brain.

    This assessment of the hemispheres’ respective functions is about right, albeit oversimplified. But my problem with the above quote is that the relative locations of the factual and emotional centres of the brain have no real bearing on the argument, and come across as window-dressing to make the whole argument seem more scientific. (I am not suggesting that Seth did this deliberately, merely pointing out how it reads.) Seth asserts that people want to be entertained (that is, be stimulated emotionally) as well as being given the facts, and I doubt any psychologist, educator or presenter would disagree. But what he needed to say was:

    1. People respond to emotional as well as factual arguments.

    2. The emotional and factual centres of the brain are in opposite hemispheres.

    3. There is evidence that arguments which increase activity in both hemispheres are more persuasive.

    (I don’t know if there is any evidence for (3), but I think you would need some to make this point convincingly.)

    This might sound picky, but it’s important: people see the shiny science bit and their critical faculties just switch off. We don’t ask how, or why, and we don’t demand evidence, because we are persuaded and reassured by the presence of an ‘expert’. (This is perhaps best typified by Milgram’s infamous obedience study of the 1960s. The 50s, 60s and 70s — a period I like to think of as B.E., Before Ethics — were a golden era in terms of understanding human behaviour, but then people realised that it was perhaps a bit mean to do this or this to people without some serious questions being asked. How the wheel turned again and we got from the post-60s ethics backlash to Big Brother, I’m not really sure; I guess wheels just do that.)

    Anyway, this abdication of our critical faculties in the face of ‘science’ is regularly exploited by advertising — look at the proliferation of ‘experts’ in commercials for things that clean, or that claim to protect you and your family from harm. But as the man in the white coat has deservedly become an advertising cliché, so people with something to sell have begun to look for a newer, shinier, more cutting-edge science with which to hawk their wares.

    Enter neuroscience.

    Neuroscience-as-sales-tool is huge. At face value, it doesn’t represent much of an advance over old-school advertising: “Look: science!” But in fact, its value is extraordinary: “Here’s a picture of the brain of someone using our product!” Think about that for a moment and realise the awesome power of being able to say This is what’s happening inside someone’s head while they experience our product. That’s pretty amazing.

    Advertisers have quickly realised the potential of neuromarketing. Some movie distributor or other wanted us to use it while I was doing my PhD, but we couldn’t turn the images around quickly enough for their deadline (fMRI takes time — or used to, anyway). Coca-Cola did it, though I’m not sure they controlled for the fact that caffeine can act as a vasoconstrictor. Anyway, get used to those images of brains, because they’re here to stay — at least until we find the Next Shiny Thing.

    Here’s my sad realisation of the week: education, which has been a bit slow to adopt technology but is finally waking up to neuroscience — education is taking advantage of our human weakness for experts and shiny-looking science.

    The other morning, I worked myself up into a fine old froth over a website* written by someone with impeccable educational credentials, which seems to exist for the sole purpose of encouraging people to consider neuroscience (and related fields) when constructing the educational experience. I mean, this site is clearly out to make the world — and education in particular — a better place. A place informed by science.

    Criticising this site would be like kicking a well-meaning little old lady, right?

    Well, I’m gonna.

    (Disclaimer: I would never kick old ladies, and what you do in your own time is your business — but if I find out you are spending it kicking little old ladies, I am going to come over there and Have Words.)

    The big, insidious problem at the interface between neuroscience and education is that there are many people talking the talk, but not so many walking the walk. Like the dog in the old Far Side cartoon, when I see websites like this, all I hear is:

    blah blah blah blah neuro blah blah blah blah education blah blah blah neuro neuro neuro!!!!!111!!11! education education neuro blah blah blah blah neuro!!

    This specific website was a prime example. Lip-service to informing education through neuroscience: pages and pages. Evidence and specific examples of how this can be done: zip. Nada. Nothing.

    This little old lady’s been feeding the urban pigeons, a kindly but perhaps misguided act. She’s been siphoning off her pension to fund an underground fascist group on whom she dotes, because they seem like such nice, polite boys. She looks so sweet, but she’s actually perpetuating harm, because educators everywhere are losing their grip on the need to use science and evidence responsibly. If their role models don’t do it, why should they?

    It would be okay — and so would my blood pressure — if this were an isolated example (goodness knows the ‘Net has its share of crazies), but it isn’t. Online educators are obsessed with neuroscience, but often don’t clarify the relationship between the educational practices they espouse and the neuroscience fairy-dust they are currently sprinkling over everything. Evidence, people. Evidence and concrete examples.

    In a crankier moment earlier this week, I wrote:

    You don’t get to co-opt my science without following its rules.

    And the #1 rule of my science is this: show me the evidence.

    Maybe this is too harsh. There are issues here about elitism and the availability of expertise: if neuroscience isn’t your background, isn’t it a bit unreasonable to expect you to understand it and write about it coherently?

    Well, maybe. Certainly it seems unfair for the taxpayer to fund education and get nothing back — we need to make academic findings easier to access and easier for the layperson to understand, rather than hiding everything behind a journal paywall. But also, I think it’s incumbent on those of us who do speak neuroscience to educate those who don’t — not just about our findings, but also about responsible interpretation of those findings.

    Last thing. A couple of weeks ago, I spoke at TCUK09 about how bullet-point-loaded slides might be less memorable than sparser slides. (Olivia Mitchell has a great summary of the research here.)

    Responses to our work have been either:

    1. “Fantastic — finally evidence for something we’ve known or suspected all along!”

    or

    2. “Hi. I work for X, selling Y, and I wonder if you can tell me/are interested in … “

    But overwhelmingly, it has been (1).

    I think this is really positive — that people do actually get excited about evidence. And I think we can, and should, build on that willingness to be excited by scientific data, until it becomes unremarkable that non-experts are capable of critically evaluating scientific arguments.

    .

    * I won’t link to it here, because I don’t wish to offend anyone or start some kind of internet tiff** — and besides, there are many such sites out there, so why single one out?

    ** tiff, for those of you younger than 30, has other meanings besides ‘a graphics file format you almost never use’.


    Stealing From Geeks, Part 2: Educators need to geek out, big time

    Other people’s presentation slides used to drive me crazy. “You’ve got Arial and Times New Roman and fifteen lines of text in 14-point font! Those colours are hideous! Stop with the serif fonts already! Are you going to read aloud every point?”

    Then I gave up caffeine.

    No, really — about two years ago, a casual conversation with my colleague Andy about minimalist slide design in teaching suddenly sat up and grew legs. We went from idle discussion to brainstorming ideas to me going home over Christmas wondering if I would get my brain to slow down to less than 1,000rpm. We managed to secure funding from the Centre for Research-Informed Teaching, and for the last 18 months, we’ve been exploring the effects of using minimalist slide presentations on people’s memory for information. I blog about it, think about it, and chase down ideas that might relate to it. I have even — *shudder* — acquired new skills to pursue it.

    In short, I have well and truly geeked out over my research. And it feels great.

    I posted last(ish) time about how education can learn from the technology sector by growing its own storytellers and role models, but I think there’s plenty more to take away from the home of geek, starting with trying to become one.

    Here’s the thing they don’t tell you in school: your inner geek is the most powerful learning resource you will ever have. It’s the thing that keeps you at your computer, or stops you putting down your book, until well past bedtime; the thing needling you with “Hey, that’s interesting …” It holds your attention when you’re unfocused; delights or enrages you in the face of apathy or exhaustion. Your inner geek won’t rest until it consumes you in the fire of your own attention.

    Harness this awesome power, and you can do nearly anything you want: a geek illuminated from within by the source of their own geeky pleasure is one of the brightest lights in the universe.

    Geek, should you need to know how to get there, is basically a place where your interests and your strengths meet:

    [Image: Venn diagram of your geek space, where your interests and your strengths overlap]

    (And since we’re on a Venn diagram jag, why not check whether you’re a dweeb, a geek, a nerd, or a dork?)

    Getting in touch with your inner geek is the fast track to achievement. Over the last two years, I’ve worked harder than I ever worked in my life — yes, even during my Ph.D. — and I’ve loved every minute. Hard work isn’t all that hard if it’s doing something you love. I also got to take our work to conferences in San Francisco and Corfu; being a geek comes with some pretty cool perks. (Okay, so I also got to go to Milton Keynes. This was a useful exercise in humility.)

    Geeking out provides students with good role models, giving them permission to indulge their own intellect and curiosity. Show me a good educator, and I’ll show you someone whose teaching involves some variation on “Hey, look at this — isn’t that cool?” Students need to see that geeking out can lead to rewarding careers. Adam Savage and Jamie Hyneman of Mythbusters have become poster-boys for scientific curiosity, but they also get invited to the Emmys. I want to give them both a big hug for making being a geek cool; the cooler being curious and knowledgeable becomes, the easier it will be for students everywhere to own their inner geek and move forward in the world.

    Education can help shape a culture in which geeking out is not just socially acceptable, but actually desirable. One of the big lies often peddled about geeks is that we’re happiest alone. I don’t think that’s true: the internet in its current form basically exists because geeks liked talking to other geeks. (Or at least reading about them from a safe distance.) When geeks hook up and reinforce their shared geekiness, amazing things happen. You see this in academic departments and at conferences where conversations blossom into full-on nerdouts as two or more people realise they have an interest in common, often kicking off with “Hey, do you know if … ?” It happened to me; you wouldn’t be reading this if it hadn’t.

    Most technological developments of the last two decades (centuries? millennia?) were created by geeks who didn’t care whether people knew they were smart; who didn’t worry about looking cool, because they were too busy chasing down their idea. Education needs to reclaim that indifference to what’s “cool” and set about showing that growing and following a passion is one of the most rewarding — and genuinely cool — things you can do.

    We don’t geek out enough; we certainly don’t let our students see us geeking out enough. Understanding and enjoying focused obsession is far too good a thing to keep all to ourselves.

    Geek out, and don’t look back.
