Laptop computers have become commonplace in K–12 and college classrooms, and educators now face a critical decision. Should they embrace computers and put technology at the center of their instruction? Should they allow students to decide for themselves whether to use computers during class? Or should they ban screens altogether and embrace an unplugged approach?
The right way forward is unclear, especially at colleges that pride themselves on connectivity. The vast majority of students carry laptops or tablets from class to class to take notes, consult references, collaborate with professors and classmates—and to update social-media sites, order takeout, and watch YouTube videos during lectures. The personal computer is a powerful tool. It can efficiently store and enhance student work; it can also effectively transport a student’s attention away from that work.
Not surprisingly, some professors have banned computers from class. But surveys suggest many remain conflicted about their value: in a 2014 survey by Richard Patterson and Robert Patterson of 90 professors at a liberal-arts school, 57 percent agreed that laptops enhanced learning, but 42 percent thought laptops decreased participation. Two-thirds of professors in a slightly larger survey from the same school had laptop-optional policies, and one in five required laptops in class.
Although students overwhelmingly like to use their devices, a growing research base finds little evidence of positive effects and plenty of indications of potential harm. To determine the impact of laptop usage on student performance, we conducted a randomized controlled trial among undergraduate students at the United States Military Academy, widely known as West Point after its location in New York. In the study, we determined which students were permitted to use laptops or tablets to take notes in class and which were not.
We find that allowing any computer usage in the classroom—even with strict limitations—reduces students’ average final-exam performance by roughly one-fifth of a standard deviation. This effect is as large as the average difference in exam scores for two students whose cumulative GPAs at the start of the semester differ by 0.17 grade points on a standard 0–4.0 scale. Importantly, these results are from a highly competitive institution where student grades directly influence employment opportunities at graduation—in other words, a school where the incentives to pay attention in class are especially high.
We believe our findings raise important questions for colleges and college students about the impact of using Internet-enabled devices during class and may have implications for K–12 educators as well.
An Experiment at West Point
The United States Military Academy is a four-year undergraduate institution with an enrollment of approximately 4,400 students. West Point’s student body is unique, due primarily to the institution’s mission of producing military officers and its attendant admissions requirements, including a nomination from a member of Congress. Students receive the equivalent of a “full-ride” scholarship; however, upon graduation, they become commissioned officers in the U.S. Army and incur an eight-year service obligation with a five-year active-duty requirement. Comparing the student population at West Point with that at other four-year institutions reveals broad similarities, aside from a major difference in the proportion of female students. At West Point, only 17 percent of students are female, compared to more than 50 percent of students at other four-year schools nationwide, on average (see Figure 1).
West Point provides an ideal environment for conducting a randomized controlled classroom experiment on Internet-connected computer usage for a number of reasons. First, as part of their “core” curriculum, students are required to take several classes in sequence, resulting in high enrollment numbers. We chose to focus our study on one of these classes: Principles of Economics. Some 450 sophomores enroll in the class each semester, but individual sections (or classrooms) are small due to an institutional commitment that caps class size at 18 students per instructor; classes in our study typically enrolled around 15 students. West Point professors also do not have teaching assistants, so all grading and student interaction are handled by the professor. Additionally, all students are required to attend class unless they have an excused absence, so we were not concerned that attendance would be affected by class-level technology policies.
Second, despite the large enrollment and small class size, student assessment in Principles of Economics is highly standardized. All classes use the same syllabus and students complete the same homework and tests. This allows us to compare grades between classes.
Third, within a given time slot, students are randomly assigned to their particular class. West Point centrally generates student academic schedules, and students cannot request a specific professor. Most importantly, prior to the first day of class, students are unaware of the computer policy of a particular class, and there is virtually no switching after the first day.
Fourth, all students at West Point are on equal footing in terms of access to educational resources: all students must purchase the same laptop computers and iPad tablets, and all academic buildings have wireless Internet access. Students also complete an introductory computer science class their freshman year prior to taking Principles of Economics.
Further, West Point uses class rankings to assign each student to a military occupation and a specific military base following graduation. A student, therefore, is especially motivated to have a high GPA so that he or she can have a better chance of receiving a preferred occupation or location.
Finally, classes are well structured: a student who falls asleep in class, arrives late, or is otherwise disruptive may be reported to the military officer who is in charge of, among other things, disciplining the student. Cell phones are not permitted in any class, making laptops and iPads the most common Internet-connected devices available to students. In a setting where students were less motivated or there was less discipline, we might expect any distracting aspects of technology to be even more pronounced.
Sample and Design
We conducted our experiment during the spring semester of the 2014–15 academic year and the fall semester of the 2015–16 academic year. Each term, we randomly assigned participating sections of the Principles of Economics course into one of three groups. The first group was “technology-free,” with students barred from using laptops or tablets at their desks.
The second group was intended to replicate the typical collegiate classroom environment: students could use laptops and tablets at will during lecture and discussion. Ideally, students would use them for note-taking or for referencing material, such as the “e-text” version of the textbook, although professors had limited ability to monitor every student’s screen. Professors did have discretion to stop a student from using a device if the student was blatantly distracted from the class discussion.
The third group allowed technology, with restrictions. This “tablet-only” group was designed to replicate the intended use of Internet-enabled technology as a non-distracting resource during class. In those sections, laptops were not permitted, but students could use iPad tablet computers so long as they remained flat, with the screen facing up and parallel to the desk surface. This modified tablet usage enabled students to take notes on the tablet or access their e-text or other class materials while allowing professors to observe and address student use of distracting applications. We cannot, however, be sure that students only used their tablets for class-specific purposes. For example, it is possible that instructors did not observe their students using iPad applications such as iMessage or other communication tools or games. Roughly 80 percent of students in classrooms that permitted laptops and tablets without restriction used an Internet-connected device during class, but only 40 percent of students in “tablet-only” classrooms used a device.
We randomly assigned sections to one of the three groups in a way that ensured each professor taught at least one section in the technology-free group and at least one section in one of the other two groups. We limited our sample to students who took the class as sophomores and excluded students enrolled in classrooms of professors who chose not to participate in the experiment. Our final sample consisted of 50 classrooms and 726 students over the two terms.
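To make the design concrete, here is a minimal sketch in Python of one way such a constrained randomization could be drawn. The group labels, rejection-sampling loop, and example data are our own illustration, not the authors’ actual procedure.

```python
import random

GROUPS = ("tech_free", "unrestricted", "tablet_only")

def assign_sections(sections, seed=None):
    """Randomly assign each (section_id, professor) pair to a group,
    redrawing until every professor has at least one technology-free
    section and at least one section in one of the other two groups.
    Assumes each professor teaches two or more sections, as at West
    Point; otherwise the constraint can never be satisfied."""
    rng = random.Random(seed)
    professors = {prof for _, prof in sections}
    while True:
        assignment = {sid: rng.choice(GROUPS) for sid, _ in sections}
        ok = True
        for prof in professors:
            groups = {assignment[sid] for sid, p in sections if p == prof}
            if "tech_free" not in groups or not (groups - {"tech_free"}):
                ok = False
                break
        if ok:
            return assignment

# Illustrative example: two professors with two sections each.
sections = [("A1", "Smith"), ("A2", "Smith"), ("B1", "Jones"), ("B2", "Jones")]
print(assign_sections(sections, seed=42))
```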
Our primary outcome is student performance on the mandatory, high-stakes final exam for the course. This exam consisted of multiple-choice, short-answer, and essay questions. Students had 210 minutes to complete it. All were required to use a computer to complete the exam, and the software program automatically graded the multiple-choice and short-answer questions. Then, with those results in hand, professors manually scored all essay responses. All but 15 students in our sample sat for the final exam, which was worth 25 percent of a student’s final grade. Professors warned their students at the beginning of the semester that a failing grade on the final exam could constitute grounds for failing the entire course, regardless of marks earned on other assignments.
We focus our analysis below on the automatically graded multiple-choice and short-answer questions, which accounted for roughly 85 percent of the full exam grade. We excluded essay scores from our analysis because we found that some professors tended to grade essay questions in a manner that ensured students on the margin received a passing score on the exam.
Results
Overall, students in our sample did relatively well on the final exam, but those who were prohibited from using Internet-connected devices during class did best. The average score on the multiple-choice and short-answer questions was roughly 71.7 percent, with a standard deviation of 9.2 percentage points. Students in classrooms without Internet-connected devices earned the highest average score, 72.9 percent. Students in classrooms where laptop and tablet usage was unrestricted earned the lowest average score, 70.5 percent, a difference of 2.4 percentage points. Students in classrooms where only tablets were allowed under strict conditions did slightly better, with an average score of 71.4 percent, but they still scored below students in the technology-free group.
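As a back-of-the-envelope check, using only the figures reported above, these unadjusted gaps can be expressed in standard-deviation units:

```python
sd = 9.2             # standard deviation of exam scores, in percentage points
tech_free = 72.9     # average score, no devices permitted
unrestricted = 70.5  # average score, laptops and tablets unrestricted
tablet_only = 71.4   # average score, tablets allowed flat on the desk only

print(round((tech_free - unrestricted) / sd, 2))  # 0.26 SD raw gap
print(round((tech_free - tablet_only) / sd, 2))   # 0.16 SD raw gap
```

These unadjusted gaps differ slightly from the regression-adjusted effects reported below, which account for differences in student background, instructor, class hour, and semester.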
Our best evidence of the effects of laptop policy comes from a separate analysis that compares the exam scores of students assigned to the unrestricted-use and tablet-only classrooms to those of students in classes where laptops were banned, while adjusting for the minor differences in the backgrounds of students across groups and including controls for the instructor, the class hour, and the semester. Instructor controls are important, as we want to eliminate any differences from instructors who are better or worse at delivering the material. Class-hour controls account for whether students perform differently at different hours of the day, such as before or after lunch. Semester controls ensure that differences are not driven by slight variations in the course between the two semesters.
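In econometric terms, this amounts to an ordinary-least-squares regression of standardized exam scores on treatment indicators plus controls and fixed effects. A minimal sketch in Python follows; the variable names, file name, and clustering choice are our own assumptions, and the authors’ actual specification may differ in its details.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical input: one row per student, with a standardized exam
# score (z_exam), 0/1 treatment indicators (technology-free classrooms
# are the omitted baseline), a baseline-GPA control, and identifiers
# for instructor, class hour, semester, and classroom.
df = pd.read_csv("exam_scores.csv")  # file name is illustrative

# OLS with fixed effects via C(); standard errors clustered by
# classroom, the level at which the technology policy was assigned.
model = smf.ols(
    "z_exam ~ unrestricted + tablet_only + baseline_gpa"
    " + C(instructor) + C(class_hour) + C(semester)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["classroom"]})

print(model.params[["unrestricted", "tablet_only"]])  # about -0.18 and -0.17
```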
Our analysis indicates that unrestricted laptop use reduced students’ exam scores by 0.18 standard deviations relative to students for whom laptops were prohibited (see Figure 2). Perhaps surprisingly, the effect in the tablet-only classrooms was similar, at 0.17 standard deviations. Although both of those negative effects are statistically significant, our study was not large enough to determine whether the true effect of modified tablet use was more or less negative than the effect of unrestricted laptop use.
To put these findings in perspective, we compare the effect of prohibiting computers to the association between GPA at baseline and final-exam success. Banning computers gives students a leg up, grade-wise: we find that a student in a classroom that prohibits computers is on equal footing with a peer who is in a class that allows computers and whose GPA is one-third of a standard deviation higher, roughly half the difference between a B+ and an A- average.
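The benchmarking arithmetic works roughly as follows. The GPA coefficient and GPA standard deviation below are hypothetical values chosen only to reproduce the figures reported in this article, not numbers taken from the study:

```python
effect = 0.18    # exam-score penalty from permitting devices, in SD units
gpa_coef = 0.54  # hypothetical: exam-score SDs gained per SD of baseline GPA
gpa_sd = 0.51    # hypothetical: standard deviation of cumulative GPA

offset_in_gpa_sd = effect / gpa_coef        # GPA gain that offsets the effect
print(round(offset_in_gpa_sd, 2))           # 0.33, i.e., one-third of an SD
print(round(offset_in_gpa_sd * gpa_sd, 2))  # 0.17 grade points, as in the intro
```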
In addition to analyzing the sample as a whole, we also looked separately at subgroups of students defined based on gender, race, scores on college-entrance exams, and entering GPA (see Figure 3). In no group did students appear to significantly benefit from access to computers in the classroom. We did find some suggestive evidence that permitting computers is more detrimental to male students than to female students and to students with relatively high entrance-exam scores. Future research is needed to verify the robustness of these differences, as they are based on smaller numbers of students and may have occurred by chance.
Implications
To be sure, Internet-connected computers may enhance the learning environment in some cases, and a 2006 study by Miri Barak, Alberta Lipson, and Steven Lerman suggests that students enjoy having computers in the classroom. In a traditional classroom, where computers and tablets are used only to take notes, the benefits may include the ability to take notes faster and to carry them at all times. However, a 2014 study by Pam Mueller and Daniel Oppenheimer found that students taking notes on laptops perform worse on conceptual questions than students required to use pen and paper. One theory is that the ability to record content quickly leads students to transcribe the lecture rather than identify its most important points.
Outside of the classroom, increased connectivity on college campuses provides opportunities for students and teachers to collaborate, empowers student research via university library–enabled online search engines, and allows students to use enhanced electronic textbooks, which include embedded videos and hyperlinks to pertinent articles on the Internet. Evidence is mounting, however, that potential distractions from web surfing, e-mail, and electronic chatting with friends can hinder student learning. In a 2010 study, James Kraushaar and David Novak found that students using laptops in class had non-course-related software open and active 42 percent of the time, and a 2008 study by Carrie Fried found that students report increased multitasking when laptops are in the classroom. Multiple laboratory-style studies demonstrate the negative effects of laptop multitasking on test performance, including a 2013 study by Faria Sana, Tina Weston, and Nicholas J. Cepeda that found that test-score performance suffered not only if a student used a laptop during class, but also if he or she merely sat near a computer user.
In K–12 schools, where students do not typically take lecture notes, a growing body of research has found no positive impact of expanded computer or Internet access. For example, a 2002 study by Joshua Angrist and Victor Lavy found that installing computers throughout elementary and middle schools in Israel had no effect on student achievement, even though their teachers used more computer-aided instruction. Another study, published in 2006 by Austan Goolsbee and Jonathan Guryan, found that the federal E-Rate program expanded California students’ Internet access by 66 percent over four years but did not have an impact on student achievement (see “World Wide Wonder?” research, Winter 2006). Other studies have found no link between enhanced student outcomes and expanded information-technology spending, universal-laptop programs, and providing students with home computers.
Our study builds on this prior research by using random assignment and by measuring the cumulative effects of Internet-enabled classroom technology over a full semester rather than immediate or short-term effects. Further, whereas participants in lab experiments may behave atypically when required to use computers in an artificial setting, our study took place in a real, high-stakes environment where students in computer-permitted classrooms chose whether to use technology.
Our findings are consistent with those of a recent study by Richard Patterson and Robert Patterson, which found that in-class computer usage reduces academic performance by between 0.14 and 0.37 points on a four-point grade scale among undergraduate students at a private liberal-arts college. These effects were concentrated among male and low-performing students and in quantitative courses. That study differs from ours because it compares students who use computers to students who do not use computers within the same classroom. In contrast, our study directly measures the effect of a common classroom policy decision (that is, to allow computers or not) by comparing classrooms that permit computers to classrooms that prohibit computers. Within our study, only about 60 percent of students assigned to classrooms that permitted some form of technology actually used a laptop or an iPad. Thus, one potential reason our estimates are smaller in magnitude is that the harmful effects of computers in the classroom could be more pronounced among students who use computers than among students who choose not to use computers. Alternatively, considering the small classroom size and strict environment at West Point, the negative effects of technology could be larger in more standard college settings.
As stated above, we do not claim that all computer use in the classroom is harmful. Exercises where computers or tablets are deliberately used may, in fact, improve student performance. Rather, our results relate to classes where using computers or tablets for note-taking is optional. Further, it was beyond the scope of our study to identify how computer and tablet access lowered test scores. Was it because students’ note-taking was worse? Were students distracted by e-mail, social media, or other websites? Did instructors teach differently when students were on their computers? As computers in the classroom become more prevalent, research focusing on these areas is clearly necessary.
In the meantime, as we head into a new school year, educators at all levels may want to think twice before allowing students to open their laptops.
Susan Payne Carter is an assistant professor of economics at the United States Military Academy. Major Kyle Greenberg is a research analyst at the Army’s Human Resources Command. Major Michael S. Walker is a research analyst at the Office of Cost Assessment and Program Evaluation within the Office of the Secretary of Defense. A more detailed account of this investigation can be found in the February 2017 issue of the Economics of Education Review. The views expressed herein are those of the authors and do not reflect the position of the United States Military Academy, the Department of the Army, or the Department of Defense.
This article appeared in the Fall 2017 issue of Education Next. Suggested citation format:
Carter, S.P., Greenberg, K., and Walker, M.S. (2017). Should Professors Ban Laptops? How classroom computer use affects student learning. Education Next, 17(4), 68-74.