MOOCs as a Massive Research Laboratory

Martinez, Ignacio, Economics - Graduate School of Arts and Sciences, University of Virginia
Turner, Sarah, Economics, University of Virginia

Massive Open Online Courses (MOOCs) offer students a new way to learn, and researchers a new laboratory for studying learning.
Education researchers have struggled to understand how students learn and what helps them achieve more.
The ``Big Data'' environment enabled by the technology used to deliver MOOCs provides an unprecedented opportunity to get inside the ``black box'' of student learning.
At the same time, the sheer scale of MOOCs, combined with the extraordinary dimensionality of the process and outcome data, also poses substantial challenges for monitoring outcomes.

In the first chapter, co-authored with Paul Diver, I explore the opportunities and challenges that MOOCs are generating for research.
A wide variety of topics related to pedagogical methods and student incentives lend themselves to research using MOOCs; throughout the chapter, I discuss lessons that can be gained both from observational comparisons and especially from the opportunity to run experiments on randomly chosen groups of students.
I start by discussing dropout rates and examining how students who decide to drop out differ from those who continue in the course.
I then discuss class forums and video lectures and how interaction with these materials correlates with achievement.
After that, I explore the strong correlation between procrastination and achievement and the implications for course design.
I also examine the role of certification offered by MOOCs and how certification options can affect choices and outcomes.
Finally, I examine the potential of linking data across courses and the opportunities and challenges of working with data that originates in surveys of MOOC participants.
All of these research opportunities also present Big Data challenges, which must be addressed with techniques such as parallel computing.
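As a concrete illustration of why parallel computing fits this setting, the sketch below shows per-student processing of clickstream logs fanned out across worker processes. The log format and field names here are hypothetical assumptions for illustration, not an actual MOOC platform schema.

```python
from multiprocessing import Pool

def summarize_student(events):
    """Toy per-student summary: count events and total seconds of video watched.
    (Event field names are assumptions, not a real platform schema.)"""
    watched = sum(e["seconds"] for e in events if e["type"] == "video_play")
    return {"n_events": len(events), "video_seconds": watched}

def summarize_all(logs_by_student, workers=4):
    """Most MOOC analyses decompose by student, so the per-student step
    parallelizes naturally across processes."""
    with Pool(workers) as pool:
        return pool.map(summarize_student, logs_by_student)

if __name__ == "__main__":
    logs = [
        [{"type": "video_play", "seconds": 120}, {"type": "quiz_submit", "seconds": 0}],
        [{"type": "video_play", "seconds": 300}],
    ]
    print(summarize_all(logs))
```

Because each student's logs are processed independently, the same pattern scales from a laptop pool to a cluster scheduler.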

In the next three chapters, I present results from randomized experiments that I run in the big-data environment offered by MOOCs.
The experiments in the second and third chapters take the form of ``nudges'': emails that simply provide information that may suggest changes in behavior to students.
In the second chapter, I evaluate the impact of providing students with information about their performance relative to their classmates.
I run a randomized experiment in the context of a Coursera MOOC, assigning students to either one of two potential treatments.
The first, framed positively, tells each student what fraction of their classmates they outperformed.
The second, framed negatively, tells each student what fraction of their classmates outperformed them.
I find evidence that students respond to this informational nudge and that framing matters.
Students who were doing relatively poorly respond to the negative treatment with more effort, and this effort translates, in some cases, into higher achievement. Students who were doing relatively well, by contrast, respond to the positive treatment. For example, among students who did not have a perfect score on the first quiz before the intervention, the average student in the control group ranked in the 31.6th percentile of the class on the third quiz, while the average student who received the negatively framed treatment ranked in the 40.5th percentile.
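The mechanics of such a two-arm framing experiment can be sketched as follows. Everything here is a simplified, hypothetical illustration (student IDs, message wording, and the three-way assignment are assumptions, not the dissertation's actual implementation):

```python
import random

random.seed(42)

# Hypothetical roster of student IDs (real IDs would come from platform exports)
students = list(range(1000))

# Randomly assign each student to control, positive framing, or negative framing
arms = {"control": [], "positive": [], "negative": []}
for s in students:
    arms[random.choice(list(arms))].append(s)

def positive_message(pct_outperformed):
    """Positively framed nudge: share of classmates the student outperformed."""
    return f"You scored higher than {pct_outperformed:.0%} of your classmates."

def negative_message(pct_outperformed):
    """Negatively framed nudge: share of classmates who outperformed the student."""
    return f"{1 - pct_outperformed:.0%} of your classmates scored higher than you."

print(positive_message(0.72))  # -> You scored higher than 72% of your classmates.
print(negative_message(0.72))  # -> 28% of your classmates scored higher than you.
```

Both messages convey the same underlying rank information; only the frame differs, which is what lets the design isolate the effect of framing.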

The third chapter identifies the causal effect of procrastination on achievement in a MOOC.
I use two approaches, instrumental variables (IV) and a randomized control trial.
I show that rain and snow affect when a student takes a quiz, and can therefore be used as instruments.
I find that taking the course's first quiz on the day it is published, rather than procrastinating, increases the probability of course completion by 15.4 percentage points.
For the randomized control trial, I send an email (directive nudge) encouraging a randomly selected group of students to procrastinate less.
I find that the effects are heterogeneous across countries, suggesting that it may be advisable to customize nudges to country characteristics.
As an example of the magnitude of these effects, Germans assigned to the treatment group were 167% more likely to obtain the course certificate, while there was no effect for Americans.
This shows that a very low-cost intervention can increase student achievement.
This online experiment may provide valuable lessons for traditional classrooms.
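With a binary instrument such as bad weather on quiz-release day, the IV estimate reduces to the Wald estimator. The sketch below illustrates the idea on simulated data; all numbers (the weather probability, take-up rates, and the assumed +0.15 effect) are made up for illustration and are not the dissertation's data.

```python
import random

random.seed(0)

# Simulated illustration:
# Z = 1 if it rained/snowed on quiz-release day (the instrument),
# D = 1 if the student took the first quiz on release day,
# Y = 1 if the student completed the course.
n = 10_000
data = []
for _ in range(n):
    z = random.random() < 0.5
    # Assumption: bad weather keeps students indoors, raising same-day quiz taking
    d = random.random() < (0.6 if z else 0.4)
    # Assumed true effect of taking the quiz on release day: +0.15 on completion
    y = random.random() < (0.10 + 0.15 * d)
    data.append((z, d, y))

def mean(vals):
    return sum(vals) / len(vals)

# Wald (IV) estimator with a binary instrument:
# effect = (E[Y|Z=1] - E[Y|Z=0]) / (E[D|Z=1] - E[D|Z=0])
y1 = mean([y for z, d, y in data if z])
y0 = mean([y for z, d, y in data if not z])
d1 = mean([d for z, d, y in data if z])
d0 = mean([d for z, d, y in data if not z])
wald = (y1 - y0) / (d1 - d0)
print(f"IV estimate of same-day quiz taking on completion: {wald:.3f}")
```

The numerator is the instrument's effect on completion and the denominator its effect on quiz timing, so the ratio recovers the effect of quiz timing itself, provided weather shifts completion only through quiz timing.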

The last chapter, co-authored with Louis Bloomfield and Sarah Turner, shows that a MOOC can serve as a complement to a brick-and-mortar introductory physics course.
I randomly assigned two thirds of the students to receive a small monetary incentive to enroll in a MOOC.
Half of the treatment group received a $10 Amazon gift card simply for enrolling in the MOOC and responding to our email.
The other half received a $50 Amazon gift card if they responded to the email and obtained an 80% final score in the MOOC.
Using these monetary incentives as instruments for enrolling in the MOOC, I show that MOOC enrollment significantly improves performance in the brick-and-mortar classroom.

Taken as a whole, these essays describe how we can use MOOCs to learn more about the student achievement production function.
Additionally, I show that very low-cost interventions can nudge students into changing their behavior and improving their achievement.
As technology advances, MOOC providers collect more data on how their users interact with their platforms, such as better data on time use.
Additionally, technology will soon allow for more complex interventions than the simple ones in these essays.
This will ultimately allow us to determine optimal course design, and learn more about personalized learning.

PHD (Doctor of Philosophy)
MOOCs, nudge, procrastination, achievement, effort, brick-and-mortar
All rights reserved (no additional license for public reuse)