Presenter non grata: are custom slide animations the new PowerPoint?

Way back when, presentation slides were the best thing since sliced bread. Nowadays, we have Death by PowerPoint, with slideware being blamed for uninspiring presentations and comatose students, and generally derided as the root of all evil. But now there’s a whole new threat: custom slide animations.

There’s been a lot of noise this week about a new journal paper by Mahar et al (2009, in press), initially picked up by Science Daily, claiming that custom PowerPoint animations could be detrimental to learning.

To summarise the experimental design: the authors used either static screenshots or custom animated slides backed with identical audio narratives to teach some basic concepts in secure computing, testing students’ knowledge/understanding before and after they viewed the presentation. The non-animated version had all visual prompts (screenshots/signals/bullet-points) visible at once, with the voiceover addressing each in turn. The animated version had each item appear in turn as it was discussed (screenshot components and bullet-points), though it’s worth noting that the bullet-points on a given slide didn’t disappear once they had been narrated.

The SD article itself doesn’t give all that much away, but Olivia Mitchell did some seriously high-quality digging and managed to acquire from the authors some samples of the materials used, along with basic figures showing that students’ correct answers in the static condition rose from 38.4% before instruction to 82% afterwards, compared with 71.4% correct in the animated condition. Olivia, because she is awesome, also addresses the study’s results in the context of cognitive load theory: you should go read her posts.

Ars Technica also weighed in, providing some more details — and a note of caution — about whether animation made things worse:

Both presentations dramatically improved the students’ scores, which were a bit below 40 percent correct in the first administration of the quiz. But the animated presentation brought scores up to 71 percent, while the animation-free version got them to 82 percent. Of the nine questions, only one saw the animated group outperform their static peers.

[… ] Animations that are intended to increase focus can be just as distracting. Note the “can” in that sentence, however — the differences between the scores of the two groups ranged from insignificant to nearly 25 percent, so it’s clear that animation isn’t uniformly harmful to learning, a point the authors themselves note in the discussion.

(Love that balanced reporting, by the way)

What I find frustrating here is that nobody is talking statistics: a difference of around 10 percentage points sounds impressive, but depending on the sample sizes it could conceivably be non-significant. I’m twitching, waiting for the article to arrive via inter-library loan, so I can see what statistical tests the authors ran.
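Just to illustrate what I mean, here’s a rough back-of-the-envelope sketch in Python. The per-condition group sizes are pure assumptions on my part, since nothing I’ve seen reports how many students took part, and collapsing each group’s average score into a single proportion is a simplification of whatever analysis the paper actually ran; but it shows how much the verdict depends on n.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Post-test averages from the reports: static 82% correct, animated 71.4% correct.
# The group sizes below are assumed, NOT taken from Mahar et al. (2009).
for n in (20, 50, 200):
    z, p = two_proportion_z(0.82, n, 0.714, n)
    print(f"n = {n:>3} per group: z = {z:.2f}, p = {p:.3f}")
```

With 20 students per condition the gap is nowhere near significant (p ≈ 0.43); with 200 per condition it comfortably is (p ≈ 0.01). Which is exactly why I want to see the actual numbers.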

The other thing making me crazy is that I don’t know exactly how students’ recall or understanding of the information was tested. The Science Daily post says:

[the authors] … tested the students recall and comprehension of the lecture.
The team found a marked difference in average student performance, with those seeing the non-animated lecture performing much better in the tests than those who watched the animated lecture. Students were able to recall details of the static graphics much better.

Recall and comprehension are quite different beasts. Even just testing basic recall is complicated: do you use multiple-choice question (MCQ)-style responses, or get students to write down an answer based on their own, unprompted recall of the information? That distinction might sound pedantic, but it’s pretty vital: it’s easy to spot the right answer among distractors in an MCQ just based on familiarity, whereas generating the correct answer yourself, with no prompts, requires that you have actually internalised the information; this distinction forms the basis of the remember-know paradigm. And that’s before we get into the nitpicking of ‘recall’ versus ‘comprehension’ …

So far, early research conducted with my colleagues Andy Morley and Melanie Pitchford suggests that recognition of the correct answer based on familiarity isn’t affected, but unprompted recall gets worse under conditions of high cognitive load. So I’ll be fascinated to read what Stephen Mahar et al have found, and whether it’s consistent with our results.

As to whether custom animation might be “bad”, I’m still pretty cautious. John Sweller, the de facto king of Cognitive Load theory, is on the record (for example in Presentation Zen) as being highly critical of PowerPoint, but I’d argue that this is an oversimplification: it’s all about how we use the technology. Slideware*, when used sensibly — i.e. with an eye on cognitive load, design aesthetic, and audience involvement — can be a brilliant tool for learning; I’d love to see a study in which custom animation can be shown to actively contribute to learning, perhaps through more minimalist slide content than that used in the study by Mahar and colleagues.

John Timmer at Ars Technica rightly points out that after slideware hit the classroom, it was a long time before anyone thought to ask whether it was the right tool for the job. I don’t think that’s an unusual response (“Hey! Shiny new technology! Let’s use it … because it’s shiny!”) but I think now that we have a culture of researching instruction, the onus is on educators to demonstrate that the tools they are using are good ones, rather than just being technological magpies. I have no doubt that slideware can be a great teaching tool; it’s up to us to find ways of using it that enhance, rather than detract from, the learning experience.

(By the way, if anyone wants to send me the full article by Mahar et al., my contact details are here, and I’d be much obliged!) Update: thank you, Olivia! Much appreciated.

* Gotta love how it’s never “death by Keynote” :)

Mahar, S., et al. (2009). The dark side of custom animation. International Journal of Innovation and Learning, 6, 581–592.


Being clear about uncertainty

The Guardian reports Professor Dylan Wiliam as saying that exam results are unreliable:

“People who manage and produce tests have a responsibility to be honest about the margins of error and report them. By pretending exam results are completely reliable, we have encouraged people to rely more on them.”

This is not really news to anyone in education, but may shock students and parents who, I’m sure, would like to think that we always get it right. And we should be striving to improve the system, because, well, it should be as fair as we can possibly make it.

But wait a minute …

“By pretending exam results are completely reliable …”

Who’s pretending? The Institute of Education? Schools?

Well, maybe. No-one likes the appearance of being unfair. But I think there’s another party here, too: the media.

The message of this news story is that the system is not perfect. But of course, no real, living, breathing system will ever be perfect! There will always be exceptions, and in any system involving measurement, there will be a margin of error — except that this is hardly ever reported in the UK media. Here, on a good day, you get basic sampling information, such as the sample size.

It’s a different story in North America: there, it’s routine to find statistical data, such as polls measuring political approval ratings or voting intentions, accompanied by information — often quite detailed — about the margin of error. And they take it pretty seriously, too.
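For anyone who hasn’t met it, the margin of error those reports quote is, at its simplest, just the half-width of a 95% confidence interval around an estimated proportion. Here’s a minimal sketch in Python with made-up poll numbers, assuming a simple random sample (real polls weight and adjust their samples, so the published figures differ a little):

```python
from math import sqrt

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p estimated from a simple random sample of size n."""
    return z * sqrt(p * (1 - p) / n)

# e.g. an approval rating of 45% from a poll of 1,000 respondents (made-up numbers)
p, n = 0.45, 1000
print(f"{p:.0%} ± {margin_of_error(p, n):.1%}, 19 times out of 20")
```

That “45%, plus or minus about three points, 19 times out of 20” phrasing is the sort of thing North American polling stories routinely append, and it costs a single sentence.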

When I read these stories in the New York Times or the Globe and Mail, it makes me feel like we’re statistically illiterate in this country.

I’m not saying this is all the media’s fault – we could do way more to ensure statistical literacy while people are still in school. But maybe reporters should try including information about margin of error anyway. I’m thinking that even a vague awareness among the general public that there is some uncertainty about the results of any statistical exercise would be better than unthinking acceptance of whatever numbers emerge. What’s the worst that can happen?

(PS – I’m reminded that polls are the worst way of measuring public opinion and public behaviour — except for all the others ;o)
