Category Archives: other people’s stuff

How your meetings could be more like classes

Recently, I read a post by Rands about how to run a meeting, and was blown away. Not because of Rands’ writing (though it is excellent; it always is), but because in explaining the attentional dynamics of how to run meetings, he was really explaining how to manage a classroom. I had a bit of a lightbulb moment right there.

I’d never thought about meetings as places that could be like a classroom before, despite the fact that many of the meetings I attend are actually held in classrooms. (Collect one Dunce Point; do not pass GO, do not collect $200.) Oh sure, I understand that you need a facilitator to ensure that everyone who has something to say gets to say it, and that people whose verbosity exceeds their contribution don’t dominate the space. But what Rands is talking about is attention wrangling: making sure everyone stays focused and contributes, and that people go away with their knowledge and understanding improved, and with a clear idea of where they are going next.

This is absolutely what being an educator is all about.

Rands writes:

A referee’s job is to shape the meeting to meet the requirements of the agenda and the expectations of the participants. Style and execution vary wildly from referee to referee, but the defining characteristic is the perceptions of the meeting participants. A good referee is not only making sure the majority of the attendees believe progress is being made, they are aware of who does not believe that progress is being made at any given moment.

… which isn’t really all that far from:

An educator’s job is to shape the class to meet the requirements of the curriculum and the needs of the learners. Style and execution vary wildly from educator to educator, but the defining characteristic is the engagement of the learners. A good educator is not only making sure that the majority of the attendees are learning, they are aware of who is not learning at any given moment.

If you want to take this analogy further, you can think of traditional, top-down, boss-runs-everything meetings as primary education, where the teacher is very much in charge, and hands down information with minimal critique or interrogation from those in attendance. At the other end of the spectrum, adult education at its best is all about facilitating sessions with a light touch, allowing everyone to explore the material for themselves while staying on track. And gosh, I wish I attended more meetings like that. I mean, by the time someone’s old enough to attend a business meeting, they’re old enough to be treated like an adult, right?

Rands’ post made me think about the discussions we are having in higher education as we start questioning the old didactic model and moving towards something more interactive, student-led, and — whisper it — enjoyable. And I started wondering how well those arguments might be applied to the management of meetings in the workplace. Just as it’s a huge waste of resources to have students in class who are not actually learning (or who are doing so in functionally-limited ways), the cumulative workplace productivity that gets pissed away because the bodies in the room aren’t engaged doesn’t bear thinking about.

Disclaimer: I’m not exactly inventing the wheel, here. While I want to believe that many of you work in places where meetings are managed sensibly, I’m assured that there are plenty of workplaces in which meetings are still very much a problem. So if you do work somewhere where meetings are useful, if not actually enjoyable, then the rest of this post may not be for you — though I hope you’ll appreciate it as an intellectual exercise, if nothing else.

The person leading the session must add value. Historically, education has involved sitting passively and listening for an hour or two at a time while someone dispenses information, a sort of pre-digital iTunes U on highly degradable reel-to-reel tape. Clearly, in an era where most things worth knowing find their way onto the Internet, and students have to pay to attend university*, such behaviour is nuts. Nevertheless, there remains a population of educators whose idea of teaching is to read aloud from their slides. While I can’t substantiate or quantify this with reference to the literature, I have noticed that when people find out this is something I’m interested in, many of them are quick to tell me about this lecturer they had at university who used to read aloud from … you get the idea. Old-school models of what classes should look like still persist.

Likewise, workplace meetings of the kind where one person talks and everyone else listens are still alive and kicking. Seth Godin argues that disseminating information is a legitimate type of meeting, but I’m less and less sure of this as my time starts feeling increasingly precious. (Though maybe I’m just becoming increasingly precious ;-P). Just as there is a grassroots movement underway to try to rid education of the kind of ‘teaching’ that is really reading aloud, so we should be taking the same approach to eradicate broadcast-style meetings. Surely in both cases it would have been better to send round a document in advance, then take advantage of valuable face-time to have some sort of informed discussion?

Good session management means making sure everyone in the room understands why they are there. Devil’s advocates will by this point be arguing that not everyone reads documents that are sent around. Well, not everyone engages in information-dump meetings either. I mean, you can get me into the room and you can impose a no-laptop rule and whatever other sanctions you choose — but fundamentally, if I can’t see the point, I’m going to go off and be a tourist inside my own head, since that’s where all the really interesting stuff is happening. As educators, when we see this disengagement happening in the classroom, we try to do something about it by emphasising to those in the room the relevance of what is being discussed. Sadly, I can count on the fingers of one hand the number of facilitators I’ve encountered who have run meetings in this way, ensuring everyone is really engaged and taking the time to draw out the more recalcitrant attendees. And I think that’s kind of a shame.

As group size increases, monitoring and remediating disengagement gets harder. I hypothesise that there’s a direct relationship between a facilitator’s skill and what size group they can wrangle at once without disengagement setting in. I had originally written that larger groups are fine for broadcast-style meetings — but actually, larger groups just encourage anonymity, diffusion of responsibility, and loafing. And anyway, if you’re going to broadcast, why not circulate a video or document so people can watch or read it at a time that’s convenient for them? It’s worth considering the participant’s experience: small groups increase the potential for better-quality interactions between those present.

To keep people engaged, you have to sustain their attention. My most popular post on this site is When giving presentations, the only rule that matters is the rule of attention, and I’m pretty sure this whole argument applies to meetings too. If you don’t get people’s attention to start with, you won’t even get as far as being able to convince them of the relevance of what you are saying. But once you have their attention, you have to wrangle it, or it will just wander off again; attention is fickle. Moving things along every five, ten, or fifteen minutes will help; the brain is crazy for novelty.

Nevertheless, even an agenda won’t save you if each item on that agenda lasts for half an hour or more; even the most pertinent meetings can lose our attention if they go on too long. Here’s Seth Godin:

Understand that all problems are not the same. So why are your meetings? Does every issue deserve an hour? Why is there a default length?

Excepting the rule of attention, rules are a millstone. I’ve seen people discuss photocopying for half an hour, for no other reason than there was sufficient slack in the meeting schedule. Consideration for other people’s time goes a long way: while this might be all you have to do today, the other person could be squeezing you in between studying, caring for an elderly relative, and working a part-time job. My nightmare is people who schedule one-to-one meetings lasting an hour or more to ‘chat’ about a single issue, with no plan or structure in mind. I mean, at least in a one-to-one tutorial, the ensuing discomfort could be offset by having some pre-prepared exercises to work through, giving the whole thing a bit of structure. Hey, there’s another tip from education: do the preparatory work — it’s a whole lot less excruciating for everyone concerned.

Rules do pervade education: parcelling up learning into arbitrarily-quantised chunks of 60 or 120 minutes is, objectively, pretty weird, when really what you’d like is to teach X until you are done teaching X, or until the students have run out of attention, then call a recess. But much as I find it hard to justify two-hour lectures, I understand that this rules-based architecture is driven by the practicalities of scheduling lecture theatre allocation across the whole campus, for a population of several thousand students, each of whom is pursuing one of a hundred or so different three-year degree courses. Suddenly, organising a one-hour meeting for seven people across different sections of your company doesn’t seem quite so bad, huh? ;o)

It’s worth distinguishing between ‘rules’ and ‘constraints’. By rules, I mean ‘hand-me-downs’: the things we do because the guy before us, or the guy before him, did them that way, and that we’re too lazy to change. Constraints are quite the opposite: these are deliberately-adopted restrictions designed to keep us on track and force us to be creative. Agendas, when adhered to, are one form of constraint; the curriculum can be another. There’s a whole organisational cult around the daily scrum meeting, which is short and time-limited and forces people to get to the point. I know people who work in teams that run a daily scrum, and from talking to them, it sounds excellent. However, it’s almost certainly less well-suited to academics, since the nature of our work means we’re mostly solitary, even when we are doing collaborative research — leaving aside that many of us don’t observe a standard 9-5, or have predictable hours day to day.

Two thoughts to finish with. First, as the estimable David Farbey pointed out at TCUK10,

Team working is “I’ll do X, you do Y” — not circulating a document for everyone to read.

And the second, which scrolled past on Twitter just now (synchronicity or apophenia? It doesn’t really matter): Meetings aren’t work. They’re what we do as a penance for not rolling along like clockwork.

Postscript: Okay, there’s one other rule I like, too: the rule of two feet, as practised at unconferences and barcamps. If, despite your best efforts, you’re not learning or contributing, go somewhere else where you can learn or contribute. I understand that this might be contentious (leave class? walk out of a meeting?), but I dare you to tell me that there’s never been a meeting, or a class, where the only thing stopping you from leaving was a vague, awkward sense that you ought to be there — and I happen to think it can be done gracefully, without being rude.

* Note for North Americans and others: until recently — the last decade or so — a university education in the UK was effectively free. Yes, really free, as in beer. Summary here; you can trace a lot of the bitterness in UK higher education from the moment that Tony Blair’s Labour government (yes, they’re the ones who’re supposed to be socialists) decided to turn universities into businesses. Important exception: Scotland, because it is awesome and now decides its own education funding policies, still does not charge Scottish students top-up fees. Pro tip for future students: be born in Scotland.

On success and reward in academia

So it’s been six months since I blogged here, which is frankly atrocious. Having said that, it doesn’t really feel like six months, because everything is whooshing past at such a rate (although interestingly, while we all like to agree that time is speeding up as we get older, the evidence for this is equivocal).

Anyway, time to fill the void. Hi, void. How are you?

VOID: HI, CHRIS. WHERE HAVE YOU BEEN?

I’m coming to that, but I need to tell you some stories on my way there.

VOID: OKAY. I LIKE STORIES.

(Aside: are you following @FEMINISTHULK on Twitter? You should; CAPS LOCK has never looked so attractive.)

This post is coming out of several conversations I’ve had recently about what it’s like to be an academic, and how academics spend their time. I think a lot of this is really not transparent to people who don’t work inside academia; a lot of the time, I don’t think it’s all that obvious to students, either.

First up, here’s Mark Changizi on why he just left academia:

You can’t write a grant proposal whose aim is to make a theoretical breakthrough.

“Dear National Science Foundation: I plan on scrawling hundreds of pages of notes, mostly hitting dead ends, until, in Year 4, I hit pay-dirt.”

Lots of research is by nature theoretical and speculative, the kind of thing you just need to chew on, indefinitely, until you make a breakthrough. But increasingly, funding bodies are turning away from this sort of thing in favour of applied research. Indeed, there’s a massive hoo-hah about HEFCE’s new Research Excellence Framework (the thing that used to be the Research Assessment Exercise — that is, the attempt to objectively measure how “good” a university department’s research is) and exactly what they mean by ‘impact’.

It’s pretty hard for theoretical research to have impact. (I guess the clue is in the word ‘theoretical’.)

Mark again:

in what feels like no time at all, two decades have flown by, and (if you’re “lucky”) you’re the bread-winning star at your university and research discipline.

But success at that game meant you never had time to do the creative theoretical leaps you had once hoped to do. You were transformed by the contemporary academic system into an able grant-getter, and somewhere along the way lost sight of the more fundamental overthrower-of-dogma and idea-monger identity you once strived for.

Mark’s a theoretician, an absurdly talented one (I can’t even envy him for that, because he’s such a nice guy) — if anyone should be able to thrive within academia, it’s him. But he’s gone, because universities are changing from environments in which academics are free to consider ideas and theories into income-seeking machines.

Wait — you thought universities were about educating people? Well, keep reading, but you might want to be sitting down.

Mark’s experience is different from mine — he’s a theoretician, and I, after many years of not knowing how to describe what I do, have finally started calling myself an applied cognitive psychologist. (My mind is much better at applying theory to existing situations than it is at coming up with entirely new ideas about how the world works.) But what our experiences of academia have in common is that it’s hard to find anyone who will reward us for doing the things we do best, even when those things are ostensibly pillars of academia.

Example? Sure. Here are the things about my job that people senior to me notice whether I am doing:

* Authoring research papers (preferably in high-impact journals)
* Bringing in money through grant funding
* Bringing in money through other means (such as knowledge transfer or consultancy work)
* Attracting negative feedback from students
* Giving a class lower- or higher-than-average marks
* Completing the requisite admin tasks required for my teaching
* Meeting my personal development goals for the year
* Turning up to the relevant staff, admin and committee meetings

Here are some things about my job that nobody senior to me will notice whether I am doing unless something is on fire:

* Teaching well (unless I am namechecked by students right in front of them)
* Reviewing and revising my lecture notes from one year to the next
* Keeping up to date with developments in the theory and practice of teaching and learning
* Being involved in learning and teaching projects at a university-wide level
* Innovating in my teaching (and encouraging or helping others to innovate)

Above all, as I found myself explaining to an incredulous American friend last week, it is pretty much impossible to get promoted on the basis of being a stellar university teacher. I don’t actually think I’m a stellar teacher — but what I’m saying is, there’s no real incentive even to try, because all I’m doing, in striving for teaching excellence, is making work for myself: not only do I have to try to squeeze all this teaching innovation in, I also have to find time to do and write up my ‘real’ research.

So what have I been doing since February? I can’t believe I’m about to type this, but here goes:

leaving the office on time, and going to the gym.

This would be the bit where I proudly announce that I now have a life, right? But actually? I’m exhausted. And it’s not from going to the gym. I’m exhausted because it’s nearly impossible to do my job inside contracted hours if you care about teaching quality. Or if you have many research projects on the go that might one day lead to publications; I have about five of these, and they eat up time and money with no guarantee that the results will ever be publishable, assuming I even have the time and energy to write them up.

[Graph: teaching vs. research time]

(Disclaimer: the above graph is purely conceptual, being based on no data whatsoever, but I think most academics would recognise it.)

Did you know that academics are estimated to work somewhere in the region of 50 hours a week? Why? Well, as I can now attest from personal experience, it’s the only way they can get anything done.

So where have I been? Mostly, trying not to have a breakdown. Trying to balance having a life with conducting teaching and research to a high standard. Trying to find a balance between using the summer to write up my research findings and taking the vacation time I’m owed (and which I never have time to take during term, because, hello, teaching and admin). Trying to rationalise what I can do, and what I can’t. Practising saying ‘no’.

It is hard. And the students are back in just over a month and I do not feel rested at all, and I haven’t done half the work I hoped to. And last summer was exactly the same.

So, void, that’s where I’ve been. Interesting times.

It’s not all doom and gloom. I’m learning things about myself, like for instance that I’m a ninja copy-editor — when you give me your poorly-written paper to co-author, I will turn it into something sublime, geeking out for hours while my fourth cup of coffee in a row goes cold. (Now I just need to figure out how to work this way with all my co-authors.) I’ve embarked on a big e-learning project, more about which soon. And I’m slowly getting more clarity about what I want and don’t want in my job. These are all good things.

And the gym? I’ll definitely keep going to the gym. Being fit is great, but more importantly, you should sponsor me to run a half-marathon for charity :)

Thanks for listening; it’s nice to be back.

The search for context in education and journalism (wicked problems, Wikipedia, and the rise of the info-ferret)

It’s a January evening, a schoolnight, and I’m sitting on my sofa thinking Stuff it. I’m tired and it’s dark and I worked hard today, damn it. It’s pretty hard, at that moment, to engage with the things I know are really good for me, like going to the gym, eating right, and engaging with decent journalism that actually says something worthwhile about the state of the world.

Ah, journalism. Why is it so hard to engage with good, wholesome news? You know, instead of the junk-food variety?

Well, for starters, it takes effort; something in short supply when you consider that UK academics apparently rack up an average 55-hour working week. So if I sometimes choose entertainment over learning, maybe it’s because I’ve been thinking really hard for 11 hours already.

Here’s the more interesting question, though: why should it take so much effort to engage with the news? I think the record will show that I did okay in school and that I know a few long words. I can follow an argument; on a good day, I can win one. But watching or reading the news and really, really getting it (not just at the who-shot-whom level, but understanding why) frequently eludes me.

For the longest time, whenever I read the news, I’ve often felt the depressing sensation of lacking the background I need to understand the stories that seem truly important.

I didn’t write that, but I could have. By the time I’d got old enough to be properly interested in the ongoing story that is Northern Ireland, no newspaper was interested in explaining the context to me. I knew it had to do with territory, nationality and religious differences, but who were ‘republicans’? What did they want? The newspapers all assumed that I knew a whole bunch of stuff that actually, I didn’t know. The dictionary was no real help, the Internet was still in short trousers, and Wikipedia didn’t yet exist. (Not that we had the Internet at home. We didn’t even have a computer.) And I was at that delicate age where I didn’t want to look stupid by asking what might have been a dumb question. (Actually, it wasn’t a dumb question at all, but I didn’t know that then.)

We would shy away from stories that seemed to require a years-long familiarity with the news and incline instead toward ephemeral stories that didn’t take much background to understand—crime news, sports updates, celebrity gossip. This approach gave us plenty to talk about with friends, but I sensed it left us deprived of a broader understanding of a range of important issues that affect us without our knowing.

Secret-that’s-not-really-a-secret: the guy who wrote this is a journalist. His name is Matt Newman, and he’s reporting here for Harvard’s Nieman Foundation about how modern journalism bypasses context in favour of immediate, juicy details.

News is complicated. To make sense of complicated things, we need context. And the newspapers aren’t delivering that context; even journalists say so.

In fairness, context is hard to come by when — as with Northern Ireland — your story pretty much defines the phrase wicked problem (see also its big brother, The Middle East). How much information is ‘enough’? How much background would you need to really understand the issues surrounding Obama’s healthcare reforms? Or the debate on university fees?

We need something, and traditional news media aren’t providing it.

But we have Google and Wikipedia, right? So there’s really no excuse for not being able to find out at least something about nearly everything. Apparently, when a big news story breaks, people really do converge on Wikipedia, looking for context; we are a generation empowered, as no generation before us, to find stuff out.

Except.

Except that I still get emails from my students that read What does [word from the course materials] mean? I used to write lots of replies of the biting-my-tongue variety, politely suggesting that the student take advantage of the resources at their disposal1, but eventually I got fed up with this, and wrote an FAQ in which I was somewhat more blunt, though I hope in a kind way.

My favourite was a student who emailed me after a deadline, apologising for the poor quality of the coursework he had submitted, and explaining that he hadn’t known what one of the words in the essay question meant — so he had just tried his best and hoped. This wasn’t a word that was archaic or obscure. This was a word widely employed in psychology and related subjects. It’s not in the paper dictionary on my desk (which, admittedly, is 20 years old), but it’s very, very easy to find and learn about online.

It’s not about having access to the information; all my students have Internet access at least some of the time. Too many (N > 0) of my students are just not in the habit of looking for information when they get stuck, like someone forgot to tell them that the Internet is good for more than just email and Facebook.

But students will surf Wikipedia and YouTube all day long, given half a chance, so what’s that about?

At Playful ’09, Tassos Stevens talked about the power of indeterminacy, and whether, if someone throws a ball, you can look away before you find out if the other guy catches it. Suspense is immensely engaging.

Wikipedia is like this: it’s a barely game, where the idea is to answer as many “Ooh, what does that mean?” questions as possible, using only the links from one article to the next. In suspensefulness terms, Wikipedia is an infinite succession of ball-throws, sort of Hitchcock: The Basketball Years. (Okay, so Tassos was talking about cricket, but my point stands.)

But education obviously doesn’t feel like a barely game, because students don’t behave there like they do when they’re surfing Wikipedia. So I guess we need more suspense. This might mean being less didactic, and asking more questions. Preferably messy ones, with no right answers.

I think that if we really want to turn our students into information ferrets, running up the trouserlegs of their canonical texts to see what goodness might be lurking there in the dark [This metaphor is making me uncomfortable — Ed.], then maybe we, like the news media, need to get better at providing context.

If students email me with simple queries rather than trying to figure things out on their own, maybe it’s because the education system hasn’t been feeding their inner info-ferrets. (Note to schools: teaching kids how to google is a completely different issue from teaching them to google and making it into a habit, and some days, it feels like you only deal in the former.)

We exist, currently, on the cusp: everything’s supposed to be interactive, but not everyone’s got their heads around this yet. (“Wait — you mean we’re supposed to participate? Actively??”) The old-school, didactic models of education and journalism (“sit down, shut up and listen; we know best”) are crumbling. And some of the solutions about how to fix journalism look a lot like the arguments being rehearsed in education about how to make it valuable and keep it relevant: develop rich content that your customers can help build and be part of; accept that you might need a model which permits the existence of premium and budget customers. (This is going to be highly contentious in higher education, and I still don’t know what I think about it. But I don’t think the idea is going away anytime soon.)

I ran one of the many iterations of this post past Simon Bostock and he wrote back: Newspapers have learned the wrong lesson of attentionomics. I think they’ve got it bang-on as far as briefly grabbing our attention goes,2 but I don’t think it’s doing much for our understanding of the news, and some days, I worry that education is headed the same way.

Jason Fry asks, if we were starting today, would we do this? This is a great question for journalism, but it’s also pretty pertinent to education: we still teach students in ways that make only marginal concessions to the Internet’s existence, treating it as little more than a dictionary, encyclopedia, or storage container.

Given that nearly anything can be found with a few keystrokes, if we had to redesign education from scratch, what would it look like?

More like Wikipedia. More ferret-friendly. And maybe upside-down.

.

[Acknowledgements: major kudos to Simon for linking to Ed Yong’s great piece on breaking the inverted pyramid in news reporting, for reading drafts of this post while I was losing my mind, and for the juicy, lowbrow goodness of LMGTFY, below.]

1 I suppose I could slam my students with Let Me Google That For You, but I prefer to save the passive-aggressive stuff for my nearest and dearest.

2 If this post were a headline, it would read STUDENTS TOO LAZY TO GOOGLE. (Admittedly this would be closely followed by SUB-EDITOR TOO DRUNK TO CRAFT ORIGINAL HEADLINE and BLOGGER CHEERFULLY IGNORES CLICHÉ.)

Why experts are morons: a recipe for academic success

This morning there was quite a bit of tweeting, back and forth, about this article and exactly how stupid it is.

“If our attention span constricts to the point where we can only take information in 140-character sentences, then that doesn’t bode too well for our future,” said Dr. Elias Aboujaoude, director of the Impulse Control Disorders Clinic at Stanford University.

Yup, you read that right. Some guy with a Ph.D. who works at one of the best universities in the world (and who’s sufficiently good at his job that they made him director of a clinic) is talking — to all appearances quite seriously — about the idea that the human attention span might shrink to the length of a tweet.

In other news, if the world were made of custard, global warming might lead to major dessertification, if we could just bake an apple crumble big enough.

Maybe there’s a good explanation. Maybe Dr Aboujaoude’s remarks were taken out of context by the San Francisco Chronicle. Or maybe they threw him this ad absurdum scenario and he ran with it because he’s a nice guy and thinks that even if reporters pose a dumb question, it would still be rude to call them on it.

Here’s my ill-conceived, half-baked thesis for the day: experts are morons.

Why? Well, we get very excited over stuff we think is new, because we’ve been too busy down in our own little silos. I pissed Harvey off earlier by posting, in good faith, a link to Tyler Cowen’s TED talk about the dangerous appeal of stories.

Kids, don’t even try to sell Harvey old rope. Even if you didn’t know it was old rope. He’ll know.

What I ended up saying to Harvey was essentially Look, there’s a movement afoot to try to get storytelling back into learning, to replace the content firehosing that passes for big education these days, McDonalds-style — and this talk serves as a useful reminder that stories are invariably a gross oversimplification of the evidence.

What I should have been saying was: Dude, I spent umpteen years becoming a subject matter expert, and at no point did anyone tell me that I needed to apply my critical faculties to delivering the material I researched so painstakingly. I’m new at this; cut me some slack!

(It turns out that Harvey and I were somewhat at cross-purposes; such are the limitations of 140-character ‘discussion’.)

Here’s the thing: academic success favours those who focus their critical faculties on developing their subject area expertise.

Below is a recipe for modest success in academic life and for becoming a legitimate ‘expert’. (Quantities and ingredients may vary according to your needs and experience.)

  • You need to be bright-ish. Not supernova bright, just bright enough. (If you’re too bright in school, you’ll get bored; see next point.)
  • You need to be well-behaved. (If you don’t behave, you’ll be labelled disruptive and that will do exactly what you think it will to your chances of academic success. Yes, even if you are bored because lessons are too easy.)
  • It helps to crave everyone’s approval. (If you don’t care what your teachers or parents think, why would you try hard on subjects that don’t really interest you?)
  • Questioning authority probably isn’t in your nature. (Or if it is, it’s a very specific kind of critical thinking, like “hey, maybe nukes aren’t that great an idea, mmkay?”) This will serve you well later, in your tenured career.
  • You are comfortable letting other people set goals for you (“You think I should go to university? Great!”)
  • You acquire a certain nerd-like pleasure (flow, if you like) from gnawing on very specific questions.
  • Your school years have conditioned you to understand that most people are mean, and best avoided.
  • Metaphorically or actually, you have let a thousand cups of tea go cold while you geek out on your chosen subject.
  • … okay, that much will get you through university and into a postgraduate programme (Masters or Ph.D.). At this point, it will be particularly helpful if you can screen out information about the world around you, because this will just distract and confuse you about the relevance of what you are doing. (Having a crisis of meaning is one of the fundamental stages of doing a Ph.D.)

If you survive this process and get your doctorate, you enter the world of teaching, admin, research, publication, and grant-getting — listed in increasing order of importance to your new employer. Your Ph.D., the entry requirement to academia that you have worked so hard on, also serves as your passport to teaching. Pause a moment to reflect on the weirdness of that statement: subject expertise is used as a measure of how competent you are to communicate that information meaningfully to non-experts.

(Some universities, mine included, are trying to address this systemic shortcoming by getting new lecturers to do a teaching certificate. This is a lot better than nothing, but it’s also quite possible to do the absolute minimum required to pass, then go on your merry way, unmoved and largely unchanged. At least we do ‘peer observation’, which is a nice way of seeing what other people are up to; it’s hard not to reflect on your own teaching when watching someone else deliver a session.)

Once you’re on the big shiny merry-go-round of teaching-admin-research-publication-grant-getting, it’s even harder to drag your ass out of the mire of just trying to keep up with your subject area and across the road into the big field of flowers that is good educational practice. And when you do manage to haul yourself over there (at the cost, by the way, of time spent on research/publication/grant application — and no-one is going to reward you for that choice), you get disproportionately excited when people show you some of the shiny things that exist in the world, because you’ve been far, far too busy becoming a subject expert to notice them. This can make educators look like big, dumb puppies — for example when we’re over-keen to co-opt neuroscience.

The other side-effect of being an ‘expert’ is that if you’re not naturally inclined to cause trouble, question the system, or think critically about more than subject-matter problems (and remember, you have floated to the top of an educational system that rewards exactly that disinclination), then sometimes you end up saying really dumb stuff, because you’re too busy thinking “ooh, that would be interesting” — like what if we really could only take in 140 characters’ worth of stuff before our attention drifted — to fully consider the validity of the question.

None of this is an excuse for living up to the ‘woolly professor’ stereotype, but I hope it helps to explain to people like Harvey why experts sometimes sound like they’re rediscovering — or even reinventing — the wheel. And as for us ‘experts’ (and boy, am I uncomfortable with that label), we need to try harder to think about the practical applications of what we do — and to remember, once in a while, to apply those finely-honed critical thinking skills to something other than our own subject areas. We’re not really morons, but to the casual observer, it’s an easy mistake to make.

.

Obligatory afterword: there are a number of stellar educators who really do manage to apply their critical faculties to more than just their own subject area, and who manage to get through university and postgraduate qualifications despite asking really awkward questions and rocking the boat. If they ever isolate a gene for that, we should all go get spliced right away.

Stealing From Geeks, Part 2: Educators need to geek out, big time

Other people’s presentation slides used to drive me crazy. “You’ve got Arial and Times New Roman and fifteen lines of text in 14-point font! Those colours are hideous! Stop with the serif fonts already! Are you going to read aloud every point?”

Then I gave up caffeine.

No, really — about two years ago, a casual conversation with my colleague Andy about minimalist slide design in teaching suddenly sat up and grew legs. We went from idle discussion to brainstorming ideas to me going home over Christmas wondering if my brain would ever slow down to less than 1,000rpm. We managed to secure funding from the Centre for Research-Informed Teaching, and for the last 18 months, we’ve been exploring the effects of using minimalist slide presentations on people’s memory for information. I blog about it, think about it, and chase down ideas that might relate to it. I have even — *shudder* — acquired new skills to pursue it.

In short, I have well and truly geeked out over my research. And it feels great.

I posted last(ish) time about how education can learn from the technology sector by growing its own storytellers and role models, but I think there’s plenty more to take away from the home of geek, starting with trying to become one.

Here’s the thing they don’t tell you in school: your inner geek is the most powerful learning resource you will ever have. It’s the thing that keeps you at your computer, or stops you putting down your book, until well past bedtime; the thing needling you with “Hey, that’s interesting …” It holds your attention when you’re unfocused; it delights or enrages you in the face of apathy or exhaustion. Your inner geek won’t rest until it consumes you in the fire of your own attention.

Harness this awesome power, and you can do nearly anything you want: a geek illuminated from within by the source of their own geeky pleasure is one of the brightest lights in the universe.

Geek, should you need to know how to get there, is basically a place where your interests and your strengths meet:

[Venn diagram: your geek space — the overlap of your interests and your strengths]

(And since we’re on a Venn diagram jag, why not check whether you’re a dweeb, a geek, a nerd, or a dork?)

Getting in touch with your inner geek is the fast track to achievement. Over the last two years, I’ve worked harder than I ever worked in my life — yes, even during my Ph.D. — and I’ve loved every minute. Hard work isn’t all that hard when you’re doing something you love. I also got to take our work to conferences in San Francisco and Corfu; being a geek comes with some pretty cool perks. (Okay, so I also got to go to Milton Keynes. This was a useful exercise in humility.)

Geeking out provides students with good role models, giving them permission to indulge their own intellect and curiosity. Show me a good educator, and I’ll show you someone whose teaching involves some variation on “Hey, look at this — isn’t that cool?” Students need to see that geeking out can lead to rewarding careers. Adam Savage and Jamie Hyneman of Mythbusters have become poster-boys for scientific curiosity, but they also get invited to the Emmys. I want to give them both a big hug for making being a geek cool; the cooler being curious and knowledgeable becomes, the easier it will be for students everywhere to own their inner geek and move forward in the world.

Education can help shape a culture in which geeking out is not just socially acceptable, but actually desirable. One of the big lies often peddled about geeks is that we’re happiest alone. I don’t think that’s true: the internet in its current form basically exists because geeks liked talking to other geeks. (Or at least reading about them from a safe distance.) When geeks hook up and reinforce their shared geekiness, amazing things happen. You see this in academic departments and at conferences where conversations blossom into full-on nerdouts as two or more people realise they have an interest in common, often kicking off with “Hey, do you know if … ?” It happened to me; you wouldn’t be reading this if it hadn’t.

Most technological developments of the last two decades (centuries? millennia?) were created by geeks who didn’t care whether people knew they were smart; who didn’t worry about looking cool, because they were too busy chasing down their idea. Education needs to reclaim that indifference to what’s “cool” and set about showing that growing and following a passion is one of the most rewarding — and genuinely cool — things you can do.

We don’t geek out enough; we certainly don’t let our students see us geeking out enough. Understanding and enjoying focused obsession is far too good a thing to keep all to ourselves.

Geek out, and don’t look back.

Retweet Culture

This week, my Twitterstream brought me the very wonderful Little People art project, so I retweeted the link.

Then I got a message from Harvey: One of my favourites. But hey, I already sent you that link, after our first ever meeting. And you liked it.

This is actually pretty funny, because Harvey and I have been chatting about how everything is being ‘re-found’ and retweeted instead of people actually making new stuff*. Ooh, new thing! Pass it on. Ooh, new thing! And because it’s interesting, we do pass it on, and so do others. BOOM — information explosion.

And because there is just so much information out there, everything old is new again. It’s like those chain-letter emails you get from your mum, warning you about something that everyone else on the Internet knew was a hoax like six years ago. You’d think that everyone would know by now and nobody would press the FWD button, but no, here it comes again, that one about how if you don’t forward this to five friends RIGHT NOW, Barack Obama will come over there and saw the legs off your hamster.

At best, rapid circulation of ideas can be massively stimulating: I find it exciting to be bombarded by quality content that makes me think about my teaching; exciting, and sometimes even inspirational. But our culture is so obsessed with the next new thing that we are in danger of losing our appreciation of depth. If you want to be shallow in your leisure hours, who cares, right? But it’s switching off that mindset that’s hard, and I think we need to be wary of anything that precludes in-depth analysis or reduces our capacity for critical thinking. Look at the shiny shiny! [video; contains language NSFW].

Whether resources like Twitter actually contribute to our alleged attentional decline is open to debate. A much-cited study this week purported to show that Twitter, text-messaging and YouTube don’t stretch your working memory the way Facebook use can. From the Telegraph’s coverage:

[The study’s author, Dr Tracy Alloway] said there was evidence linking TV viewing with Attention Deficit Hyperactivity Disorder (ADHD) while extensive texting was associated with lower IQ scores.

To be honest, I’d feel a lot happier if this whole story didn’t smack of correlational data being interpreted as causal in yet another attempt to show how society as we know it is circling the drain. Sure, I imagine if you use Twitter for nothing but exchanging 140-character messages, then it probably isn’t giving your brain the full workout. But what about those of us who use Twitter to pass along information about longer articles? I’ve read 10,000-word articles linked to from Twitter in a single sitting. Again, it’s all about how you use the software, a nuance that seems to escape the mainstream media most of the time. I really think that networking culture of the kind fostered by Twitter is a potential goldmine: there’s something there for everybody, and knowledge flow within a network is the future of training and education.

But we do need to consider whether the constant tweeting and retweeting of information might erode the time people used to spend making stuff. To avoid this, I think we need to get serious about blocking out time away from the infostream. There’s a huge temptation, if the tap is always running, to keep holding a bucket under it, but I think that way leads to madness. Step away from the tap and do stuff, don’t just punt ideas around. Otherwise you’re not an expert, you’re a dilettante. (Trust that link and stick with it; it’s a good ’un. Again, probably contains language NSFW.)

For me, the hardest part is finding the right balance between being stimulated by retweet culture into creating new stuff, and spending enough time away from it to actually do the creating. We’re going to have to move to a way of thinking in which infostream management is taught in schools; at the moment, most taught skills focus on how to find what you need, but I suspect that increasingly, what people will really want/need to know is how to manage the flood.

I saw a great tweet recently, but of course I can’t remember who it was now (if you know, please tell me so I can attribute it appropriately). Someone was showing Twitter to their mother. The mother looked at it and said, But how do you make it stop?

.

* Only this week, my brain re-found the term attentionomics. Of course, I didn’t coin it; a quick google will show that I am not in any way the first person to use it. Nevertheless, I am going to start using it when explaining what I do, because it’s a good word.

Teaching naked in the age of big education

PowerPoint is currently making headlines in education, though probably not for the reasons Microsoft would like.

The Chronicle of Higher Education reports that José Bowen of Southern Methodist University has banished computers from his classrooms:

Mr. Bowen wants to discourage professors from using PowerPoint, because they often lean on the slide-display program as a crutch rather than using it as a creative tool. Class time should be reserved for discussion, he contends, especially now that students can download lectures online and find libraries of information on the Web.

That same article in the Chronicle cites research conducted by my colleague Sandi Mann, showing that many students find lectures boring, and that the most commonly-cited reason for this is use of PowerPoint.

So perhaps ‘teaching naked’ (sans PowerPoint, gentle reader) might cure students’ boredom and encourage instructors to write more creative, interactive classes?

Well, while I’m curious to know where José Bowen’s nebulous ‘often’ statistic comes from, it may be true that lack of confidence encourages instructors to rely more on slides: in a recent pilot study, Andy Morley and I found that of the university lecturers we surveyed, 91% said that since starting teaching, they had reduced the amount of text they used on their slides. We interpret this to mean that increased practice leads to increased confidence; the more comfortable you are with your subject, the less material you ‘need’ on the slide. However, it’s still a big leap from there to asserting that instructors routinely use slides “as a crutch”; there are plenty of other reasons they might choose to use slides, something Mr Bowen apparently chooses to overlook.

There are really two issues in play here: taking slides out of the classroom, and making higher education more interactive. They’re kind of all twisted up together, so here are my thoughts about teaching naked, and why student engagement and class size present such a knotty problem in this era of massification and McDonaldization in HE.

1. Large class sizes turn higher education into a broadcast medium

Maybe José Bowen only teaches small classes. If so, he is very fortunate, because small-group teaching is brilliant. It allows instructors to get to know their students and allows students to engage, make mistakes, and ask questions in a relatively low-pressure environment.

But try getting students to do these things surrounded by 300 of their peers — it’s like pulling teeth. Not to mention that you need a decent pair of lungs, or a microphone, to maintain order. On this scale, education is pretty much a broadcast medium, and there’s not much you can do about it except ensure that, when you are talking (which really shouldn’t be all the time), you have appropriate visual aids, since we know these benefit learning.

So no, teaching naked is not necessarily the best thing to do when you have really big classes, as many of us do. It might be appropriate, but then you also need to consider that:

2. Teaching naked is more suitable to some subject areas than others

Some of my colleagues teach slideless, and their lectures are enduringly popular, seemingly undiminished by the absence of visual aids. To take one example, material in social psychology is rarely inherently visual; what’s important is the ability to spin a decent yarn, and I am glad to know and work with people who exemplify this approach.

But when I give lectures (remember, 300 students) on neuroanatomy or the visual system, I show diagrams, because then students can see what I am talking about. I could, of course, describe the brain’s visual pathways in excruciating detail, but students would soon be adrift in a sea of unfamiliar anatomical terminology, and I expect my lectures would be bitterly unpopular. Why add unnecessarily to the lecture’s extraneous cognitive load? Writers everywhere know the answer: show, don’t tell.

Of course, I don’t have to use slides as my visual aids, but they’re a highly visible medium that I know I can use well in large classes, so I use ’em. (Your mileage may vary.) But this then throws up a whole new problem:

3. Students expect that their classes are about information delivery

Slides have become a big part of this expectation. Yes folks, we’re damned if we do and damned if we don’t: students have been known to complain when staff don’t use lecture slides, or don’t make them available. There are probably many reasons for this (ease of note-taking, knowing how to spell technical terminology, zoning out and missing something critical, or missing the entire lecture and needing a frame of reference — and no doubt there are plenty more), but I think they all boil down to the importance of possessing information.

Implicit in this delivery model of education is the suggestion that students are passive vessels into which learning is transferred via their attendance at lectures, a situation which may be exacerbated by use of slide-based handouts. The Chronicle notes that:

The biggest resistance to Mr. Bowen’s ideas has come from students, some of whom have groused about taking a more active role during those 50-minute class periods.

Of course, if students are to take a more active role than sitting in lectures, this requires that they have done some reading around the topic. But getting students to do even basic reading prior to class isn’t that straightforward; for one thing, since the introduction of tuition fees, many students now hold down part-time or even full-time jobs to pay their way through university. I have known students choose modules on the basis of what will fit around work, rather than their own academic interests, and I have found out the hard way that even when you say “this prior reading is mandatory for the session”, you either quickly reduce the number of people coming to that class, or end up adapting the session to accommodate those students who have not, despite your advice, done the reading. So here too, ‘teaching naked’, if we take that to mean ‘facilitating student discussions of material they have read outside class, in the absence of computers or other overt delivery methods’, might not work well.

So, should I kick computers out of my lecture theatre?

My honest feeling is that teaching naked, much as I admire the principle, isn’t always compatible with how big higher education actually functions. We do have small-group teaching, and we try to keep that as interactive as possible, but plenty of our teaching is still lecture-based, and I think it’s a mistake to reject computers (and slideware) out of hand, no matter how cool it is to diss PowerPoint right now*.

Fundamentally, it’s dogmatic to apply any hardline approach, whether that be ‘no slides’ or ‘slides all the way’. Educators are supposed to be smart — so let’s act like it.

* Actually, it’s been cool for quite a while. Lincoln took some stick about the Gettysburg Address and it all sort of snowballed from there.
