Way back when, presentation slides were the best thing since sliced bread. Nowadays, we have Death by PowerPoint, with slideware being blamed for uninspiring presentations and comatose students, and generally derided as the root of all evil. But now there’s a whole new threat: custom slide animations.
There’s been a lot of noise this week about a new journal paper by Mahar et al. (2009, in press), initially picked up by Science Daily, claiming that custom PowerPoint animations could be detrimental to learning.
To summarise the experimental design: the authors taught some basic concepts in secure computing using either static or custom-animated slides, backed by identical audio narration, and tested students’ knowledge and understanding before and after they viewed the presentation. In the static version, all visual prompts (screenshots, signals, bullet points) were visible at once, with the voiceover addressing each in turn. In the animated version, each item (screenshot components and bullet points) appeared as it was discussed, though it’s worth noting that the bullet points on a given slide didn’t disappear once they had been narrated.
The SD article itself doesn’t give much away, but Olivia Mitchell did some seriously high-quality digging and managed to acquire from the authors some samples of the materials used, along with basic figures: students’ correct answers in the static condition rose from 38.4% before instruction to 82% afterwards, while students in the animated condition scored 71.4% correct. Olivia, because she is awesome, also addresses the study’s results in the context of cognitive load theory: you should go read her posts.
Ars Technica also weighed in, providing some more details — and a note of caution — about whether animation made things worse:
Both presentations dramatically improved the students’ scores, which were a bit below 40 percent correct in the first administration of the quiz. But the animated presentation brought scores up to 71 percent, while the animation-free version got them to 82 percent. Of the nine questions, only one saw the animated group outperform their static peers.
[…] Animations that are intended to increase focus can be just as distracting. Note the “can” in that sentence, however — the differences between the scores of the two groups ranged from insignificant to nearly 25 percent, so it’s clear that animation isn’t uniformly harmful to learning, a point the authors themselves note in the discussion.
(Love that balanced reporting, by the way)
What I find frustrating here is that nobody is talking statistics: while a difference of around 10 percentage points sounds impressive, it could conceivably be non-significant, depending on the sample sizes and the variance in the scores. I’m twitching, waiting for the article to arrive via inter-library loan so I can see what statistical tests the authors ran.
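To see why the raw percentages alone don’t settle the question, here’s a minimal sketch of a two-proportion z-test on the reported post-test scores. The group sizes aren’t given in the coverage I’ve seen, so the n = 30 per group below is purely a hypothetical assumption for illustration; with groups that small, an 82% vs 71.4% gap would not reach conventional significance.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-proportion z-test statistic: is the gap between two
    success rates bigger than chance would predict at these ns?"""
    x1, x2 = p1 * n1, p2 * n2          # implied counts of correct answers
    p_pool = (x1 + x2) / (n1 + n2)     # pooled proportion under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Reported post-test scores: 82% (static) vs 71.4% (animated).
# The paper's group sizes are unknown to me; n = 30 each is a guess.
z = two_proportion_z(0.82, 30, 0.714, 30)
print(round(z, 2))  # ~0.97, well short of the 1.96 needed for p < .05
```

The point isn’t this particular number (which hinges entirely on the assumed n); it’s that the same 10-point gap can be significant or not depending on sample size, which is exactly why the inferential statistics in the paper matter.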
The other thing making me crazy is that I don’t know exactly how students’ recall or understanding of the information was tested. The Science Daily post says:
[the authors] … tested the students’ recall and comprehension of the lecture.
The team found a marked difference in average student performance, with those seeing the non-animated lecture performing much better in the tests than those who watched the animated lecture. Students were able to recall details of the static graphics much better.
Recall and comprehension are quite different beasts. Even testing basic recall is complicated: do you use multiple-choice questions (MCQs), or get students to write down an answer based on their own, unprompted recall of the information? That distinction might sound pedantic, but it’s pretty vital: it’s easy to spot the right answer among distractors in an MCQ based on familiarity alone, but generating the correct answer yourself, with no prompts, requires that you have actually internalised the information; this distinction forms the basis of the remember–know paradigm. And that’s before we get into the nitpicking of ‘recall’ versus ‘comprehension’ …
So far, early research conducted with my colleagues Andy Morley and Melanie Pitchford suggests that recognition of the correct answer based on familiarity isn’t affected, but unprompted recall gets worse under conditions of high cognitive load. So I’ll be fascinated to read what Stephen Mahar et al have found, and whether it’s consistent with our results.
As to whether custom animation might be “bad”, I’m still pretty cautious. John Sweller, the de facto king of cognitive load theory, is on record (for example in Presentation Zen) as being highly critical of PowerPoint, but I’d argue that this is an oversimplification: it’s all about how we use the technology. Slideware*, when used sensibly (that is, with an eye on cognitive load, design aesthetic, and audience involvement), can be a brilliant tool for learning; I’d love to see a study in which custom animation is shown to actively contribute to learning, perhaps through more minimalist slide content than that used by Mahar and colleagues.
John Timmer at Ars Technica rightly points out that after slideware hit the classroom, it was a long time before anyone thought to ask whether it was the right tool for the job. I don’t think that’s an unusual response (“Hey! Shiny new technology! Let’s use it … because it’s shiny!”) but I think now that we have a culture of researching instruction, the onus is on educators to demonstrate that the tools they are using are good ones, rather than just being technological magpies. I have no doubt that slideware can be a great teaching tool; it’s up to us to find ways of using it that enhance, rather than detract from, the learning experience.
(By the way, if anyone wants to send me the full article by Mahar et al., my contact details are here, and I’d be much obliged!) Update: thank you, Olivia! Much appreciated.
* Gotta love how it’s never “death by Keynote” :)
Mahar, S., et al. (?) (2009). The dark side of custom animation. International Journal of Innovation and Learning, 6, 581–592.