GUEST POST: When Theory Falls Short - Exploring Classroom-Based Learning Strategies at Scale

By: Dan Davis

Dan Davis (@NotDanDavis) is a PhD student at TU Delft in the Netherlands, in the Lambda Lab research group. The full research paper referenced in this post was recently published & presented at the 2016 European Conference on Technology Enhanced Learning, where it was awarded Best Student Paper. The presentation slides can be found here.

The learning sciences field has equipped teachers and students alike with a robust toolset of effective learning strategies. But do the learning strategies we’ve come to know and love transfer between learning environments? For example, just because evidence from traditional classroom studies (1) supports the effectiveness of strategies like elaboration or interleaving, should we take their direct applicability to other learning environments for granted?

Given their chronically dismal course completion rates and profoundly diverse student populations, Massive Open Online Courses (MOOCs) seem a great testing ground for this quandary. With typical course completion rates hovering around five percent, there is ample room for improvement in boosting MOOC learner achievement. The challenge arises when we consider the nature and realities of MOOCs: because they are free, open, and have zero barrier to entry, it takes a seriously committed, self-driven learner to succeed.

Context

Several researchers have found that MOOC learners lack the self-regulatory skills necessary to succeed, and various approaches have been tried (with mixed success) to support the self-regulated learning (SRL) behavior that is so integral to the self-directed learning scenarios in which many online learners find themselves (2, 3).

My colleagues and I at the Lambda Lab, housed in the Web Information Systems group at TU Delft in the Netherlands, turned to the learning sciences literature to tackle this problem of massive attrition. We tried to improve completion rates in two of our MOOCs by introducing two widely accepted learning strategies: (i) retrieval practice and (ii) goal setting.

We hypothesized that if we built theory-backed, integrated support mechanisms for MOOC learners that enable retrieval practice and study planning/goal setting, we would see an increase in academic performance and course engagement.

Experimental Setup

To test this hypothesis, we built and deployed our support mechanisms in two separate TU Delft MOOCs on the edX platform. We tested the effectiveness of retrieval practice in a MOOC on Functional Programming, where learners were taught the programming language Haskell. The goal-setting experiment was conducted in a course on Industrial Biotechnology, where learners explored past and present innovations in biobased production. See the table below for more information on the courses and their enrollment figures.

Retrieval Practice

The retrieval practice study was designed with three cohorts, or experimental conditions:

  1. Control: received no treatment
  2. Cued: after each week’s lecture video, this group was given a retrieval cue (see below for the exact wording) followed by an open text input box to write in
  3. Given: whereas the Cued group received a prompt to respond to in writing, this group was provided with a summary of the key points from the previous lecture.

The Given condition served to account for the difference between actively retrieving one’s own memories & thoughts (Cued condition) and passively receiving and reading a summary provided by the instructor (Given condition). The learners in the Cued condition received the following prompt after each week’s lecture videos:

“Please respond in 3-5 sentences to the following question: ‘In your opinion, what are the most important points from the previous video?’”
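
As a technical aside, assignment to conditions like these is often implemented by hashing each learner’s ID into a stable bucket, so that every learner sees the same condition on every visit. Below is a minimal Python sketch of that idea; it is not the study’s actual code, and the learner IDs and salt string are purely illustrative.

```python
# Minimal sketch of deterministic three-way cohort assignment.
# Hashing a stable learner ID keeps each learner in the same condition
# across sessions, with roughly uniform cohort sizes. The salt string
# and learner IDs below are illustrative, not from the actual study.
import hashlib

COHORTS = ["control", "cued", "given"]

def assign_cohort(learner_id: str, salt: str = "retrieval-2016") -> str:
    """Map a learner ID to one of the three experimental conditions."""
    digest = hashlib.sha256(f"{salt}:{learner_id}".encode()).hexdigest()
    return COHORTS[int(digest, 16) % len(COHORTS)]

for learner in ["learner-001", "learner-002", "learner-003"]:
    print(learner, "->", assign_cohort(learner))
```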

Study Planning

This experiment in the Industrial Biotechnology course was a simple A/B setup. Half of the learners received the study planning module at the beginning and end of each week, and the other half did not.

The study planning module consisted of two parts: (i) at the beginning of each week, we prompted the learners to think about and write out their plan for the week in an open text input box:

“In the space below, please describe, in detail, your study plan and desired learning objectives for the week regarding your progress: 
e.g. – I plan to watch all of the lecture videos. 
– I will write down questions I have about the videos or assignments and discuss them in the forum.”

And (ii) at the end of each week, we asked them to think back to their initial plan, reflect on how successfully they had stuck to it, and consider why or why not:

“How closely did you follow your study plan from the beginning of the week? Did you successfully meet all of your learning objectives? In the space below, explain how you can improve upon your study habits in the following weeks in order to meet your goals.”

Results

The intent-to-treat (ITT) analyses (those which include all learners involved in the experiments) yielded no significant differences between the conditions. The outcome measures were as follows:

  1. Final Grade (from 0 - 100, where 60 is passing)
  2. Course Persistence (final week reached, i.e., how far into the course a learner makes it)
  3. Various engagement metrics (amount of time in the platform, number of sessions logged, videos watched, etc.)
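
To make these metrics concrete, here is a rough Python sketch of how session counts and time on platform can be derived from a learner’s timestamped click events. The 30-minute inactivity threshold is a common sessionization heuristic, not necessarily the one used in our paper, and the data layout is assumed purely for illustration.

```python
# Hedged sketch: deriving session count and time on platform from a
# stream of per-learner event timestamps. A new session starts whenever
# the gap between consecutive events exceeds 30 minutes (a common
# heuristic, assumed here for illustration).
from datetime import datetime, timedelta

SESSION_GAP = timedelta(minutes=30)

def sessionize(timestamps: list[datetime]) -> list[timedelta]:
    """Split an event stream into sessions and return each duration."""
    if not timestamps:
        return []
    timestamps = sorted(timestamps)
    durations = []
    start = prev = timestamps[0]
    for t in timestamps[1:]:
        if t - prev > SESSION_GAP:          # gap too long: close session
            durations.append(prev - start)
            start = t
        prev = t
    durations.append(prev - start)          # close the final session
    return durations

events = [datetime(2016, 9, 1, h, m)
          for h, m in [(9, 0), (9, 10), (9, 40), (14, 0), (14, 5)]]
sessions = sessionize(events)
print(len(sessions), "sessions,", sum(sessions, timedelta()), "total time")
```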

While the ITT analyses came up empty, we next narrowed our focus to only those learners who engaged with (clicked on) the study planning or retrieval practice modules, whom we will refer to as “study planners.” We then compared the metrics listed above between study planners and highly engaged learners in the control condition.

This is where we finally observed significant differences (at the 99% confidence level). In the study planning experiment, using a one-way ANOVA with a post-hoc Games-Howell test, we found that learners who clicked on at least one study planning module in the course:

  1. earned higher final grades
  2. persisted deeper into the course
  3. logged longer and more frequent sessions in the course.
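
For readers curious what this analysis looks like in code, here is a hedged Python sketch using the pingouin statistics library. Only the test choices (a one-way ANOVA followed by a Games-Howell post-hoc test) come from the study; the DataFrame layout, column names, and toy grades are assumptions for illustration.

```python
# Sketch of the reported analysis: one-way ANOVA plus a Games-Howell
# post-hoc test, via the pingouin library. The groups and grades below
# are toy values, not the study's data.
import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "group": ["planner"] * 5 + ["control"] * 5,
    "final_grade": [74, 66, 81, 70, 77, 58, 63, 49, 61, 55],
})

# Omnibus one-way ANOVA across the groups.
print(pg.anova(data=df, dv="final_grade", between="group"))

# Games-Howell pairwise comparisons; unlike Tukey's HSD, this test does
# not assume equal group variances or equal group sizes.
print(pg.pairwise_gameshowell(data=df, dv="final_grade", between="group"))
```

Games-Howell is a sensible pairing here because engagement-filtered subgroups like these can differ in both size and variance.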

Takeaways

Based on the results of the ITT analyses, we can clearly see that grounding a teaching strategy or intervention solely in educational/pedagogical theory is not enough. Serious consideration needs to be given in the design & implementation process to ensure that the foundational aspects that make the theory “work” are activated in the learner experience. In other words, when it comes to encouraging good learning strategies through instructional interventions, you can lead a horse to water, but you cannot make it drink, so to speak. Merely giving students access to a resource is not enough for it to benefit them and activate the necessary cognitive processes.

We can see in our analyses that consider only those who interacted with the goal-setting intervention that, if we can get students to engage, things tend to work out; the challenge, especially with MOOCs, is sparking that engagement.


References:

(1) Mayer, R. E. (2002). Multimedia learning. Psychology of Learning and Motivation, 41, 85-139.

(2) Kizilcec, R. F., Pérez-Sanagustín, M., & Maldonado, J. J. (2016, April). Recommending self-regulated learning strategies does not improve performance in a MOOC. In Proceedings of the Third (2016) ACM Conference on Learning @ Scale (pp. 101-104). ACM.

(3) Hood, N., Littlejohn, A., & Milligan, C. (2015). Context counts: How learners' contexts influence learning in a MOOC. Computers & Education, 91, 83-91.