Steven Volk (October 5, 2014)
Once again the issue of laptops in the classroom has nosed its way onto my radar screen. I’ve presented materials before to help faculty think about developing a policy for laptop use in the classroom [e.g., the “Articles of the Week” from Oct. 28, 2013 (“Paper or Screen”), which offers research suggesting that people often understand and remember text on paper better than on a screen, and that screens may inhibit comprehension by preventing people from intuitively navigating and mentally mapping long texts; or from Oct. 15, 2013 (“Use of Laptops in the Classroom”), which highlights some general research on best practices for laptop use in the classroom].
I’ve also referenced the research by Pam Mueller and Daniel Oppenheimer, which appeared in Psychological Science (April 23, 2014), on note taking on a laptop vs. by hand. Their work suggests that taking notes by hand, a procedure that requires more processing, produces more significant learning gains than (essentially) taking dictation on the computer.
Now (thanks to a note from Jeff Witmer) I was led to a new (September 9, 2014) entry on the topic by Clay Shirky titled, “Why I Just Asked My Students to Put Their Laptops Away.” If you don’t know Shirky, he’s a Jedi warrior for the use of technology who teaches interactive telecommunications at NYU and a dynamic proponent of crowdsourcing collaborations. His 2008 book (Penguin), Here Comes Everybody: The Power of Organizing Without Organizations discusses the impact of the internet on group dynamics. (Here is the first chapter.) Anyway, when Shirky says “lids down” to his class, it has a different resonance than, say, if Mr. Chips recommended it.
Not surprisingly, a lot of research informed his decision. He references the work on multi-tasking, which uniformly shows how harmful it is to the quality of cognitive work, and, in particular, how detrimental it is for those engaged in college-level work. He discusses the research concluding that when we multi-task, rather than doing more in a specific time frame, we actually do less. Yet we continue to multi-task even in the face of declining efficiency because of the emotional gratification provided by the “other” tasks we’re “taking care of.”
He also shines a light on what we have known for a long time but have tended to ignore, something that software developers and social media entrepreneurs have used to build their networks into the hundreds of millions of users. Simply put: It’s much more compelling to find out from a “friend” what he thought of the party you both went to than to understand the G protein-coupled signal transduction pathway. Guess which one is going to win the student’s attention? As Shirky argues, getting a visual alert that you have just received a message on Facebook is really (“actually, biologically”) impossible to resist. “Our visual and emotional systems are faster and more powerful than our intellect,” he notes. For this reason, Shirky argues, he’s “stopped thinking of students as people who simply make choices about whether to pay attention, and started thinking of them as people trying to pay attention but having to compete with various influences, the largest of which is their own propensity towards involuntary and emotional reaction.”
The final piece of evidence that pushed him into the “lids-down” mode was the research published in Computers and Education in 2013 by Sana, Weston and Cepeda. They found not only that “participants who multi-tasked on a laptop during a lecture scored lower on a test compared to those who did not multitask,” but that “participants who were in direct view of a multitasking peer scored lower on a test compared to those who were not” (my emphasis). This “second-hand smoke” argument (“nearby-peers”) suggested to him that adopting a laissez-faire approach to laptop use in class was pretty much the same as saying that you can choose to smoke in class if you want, since it’s only the smoker who is harmed by that action.
Not surprisingly, there has been some significant internet pushback against Shirky’s argument, often of the personally-offended variety that greets a strict vegan who has just recommended that his friends tuck into a 16-oz T-bone. While the arguments vary (look at the great things you can find on the internet; the problem is teachers who lecture too much, not laptops; we have always had distractions of different kinds in class, so why is this different; technology makes it possible to do things, not necessary to do them), they all point to the positive benefits of laptop use in student learning. For my own part, I’ve long been a believer that if you engage students sufficiently and have them continually moving around the class in smaller group discussions, you can overcome the multi-tasking problem that comes with laptop use. But I’ve also always admitted that this is not really an approach available to those who teach in larger lecture-style classes, particularly in classrooms with fixed, amphitheater-style seating (a whole other issue – don’t get me started!).
In the end, though, two elements of Shirky’s argument have led me to rethink my own laissez-faire approach: (1) programmers and software producers spend millions, if not billions, developing programs that are specifically designed to capture your eyes and to keep your attention focused on what they have to offer. How easy is it to compete with that when what we offer is an invitation to our students to crack their brains open thinking about Kant or game theory? (2) I can no longer ignore the peer-effect literature, which is compelling.
So, give it some thought – I’d be interested to hear responses from colleagues on either side of the argument (or in the middle).