The Learning Scientists

Improving College Student Outcomes with Course Policies that Support Autonomy (Part 2)

Cover image by Kei from Pixabay

By Megan Sumeracki

Last week, my blog post covered a paper by Simon Cullen and Daniel Oppenheimer titled Choosing to Learn: The Importance of Student Autonomy in Higher Education (1). In the paper, they present two studies.

The first study, covered in Part 1, was a randomized controlled field study examining the effects of allowing students to choose whether their attendance was mandatory. Let’s review: In Study 1, the researchers randomly assigned sections of a college course to one of two course policies. Under one policy, students’ attendance was required and affected their grade (positively if they missed no more than 3 classes, and negatively if they missed more). Under the other policy, students got to choose: they could opt in to the same attendance policy as the other sections, or opt out, in which case attendance was neither tracked nor counted toward their grade. The main takeaways were: 1) most students (about 90%) opted into the attendance policy; and 2) attendance was a bit better, and significantly more consistent throughout the semester, in the sections where the policy was optional relative to the sections where it was mandatory.

In the second study, the authors conducted a cohort study examining the effects of allowing students to opt out of challenging, high-effort assessments. This study is the topic of today’s post.

Study 2: Opting out of Challenging, High-Effort Assessments

The authors motivated this study by noting that regular assessments and feedback are important for students. However, if assignments are worth very little toward the final grade, students may not put much time and effort into them. At the same time, forcing students to complete many required assessments can reduce autonomy and have negative effects (e.g., spending less time and effort, procrastinating, or skipping assignments). If instructors weight assignments more heavily in the final grade, that could push students to spend more time and effort, but the stress of these higher-stakes assignments could end up hurting student learning and well-being. Thus, the authors were trying to find a way to encourage students to put forth a lot of effort without relying on such high-stakes assessments.

The Method

The authors tested a course policy in which students were given assessment options and were allowed to choose, compared to a mandatory policy. The study was conducted with college students taking an introductory philosophy course across two semesters (i.e., there were two cohorts of students*).

*Note: Students were not randomly assigned to conditions; rather, they were in one condition or the other based on when they took the course (their cohort). This makes determining cause-and-effect relationships more difficult. You can see this blog post on research methods to read about these issues. The authors did examine a number of variables, such as seniority and major, and found no statistical differences between the two cohorts. Still, without random assignment, even at the section level as in the first study, the authors cannot rule out alternative causes (or “confounds”). The authors rightly acknowledge this.

One cohort of students had a mandatory assignment policy. The students were required to complete 20 “argument analysis” problem sets.

The second cohort of students had a free-to-switch assignment policy. These students could choose between the problem sets and an alternative essay-based assignment. The essay assignments required less work than the problem sets, and included answering weekly reading questions and writing 5-page midterm and final essays. All students were told about the relative difficulty of the two assignment tracks, and students could switch into the lower-effort (essay) track at any point before the midterm.

When students completed assignments, they self-reported how much time they spent. The professor graded the assignments for the class. Students earned 0, 1, or 2 points on each of the problem sets. To earn at least 1 point, students needed to submit a “meaningful attempt” before the deadline.

Image by Pete Linforth from Pixabay

The Results

In the free-to-switch cohort, 90% of students started on the more challenging problem-set track. So, even when they were told that one track was more challenging than the other and given a choice, students largely selected the more challenging assessment track! Surprisingly (at least to me), only 5% switched to the essay track. Taken together, this means 85% of students selected the more challenging track and stuck with that choice throughout the semester. That’s a win in my book!

Next, the authors looked at how long students spent on their assignments and at students’ grades. Students in the free-to-switch cohort were more likely than students in the mandatory cohort to report spending more time on the problem sets they completed. Time spent decreased overall across the semester, but this was true for both cohorts. Further, grades on the problem sets were higher in the free-to-switch cohort than in the mandatory cohort.

Side note about design and analyses: The researchers wanted to make sure they were comparing students completing the same assignments, so they analyzed only the problem-set assignments. However, they were also concerned that the students who opted to switch to the essays in the free-to-switch cohort could simply have been weaker students, and that comparable weaker students in the mandatory cohort had no way to switch out. They were worried this would skew the results. So, they re-ran their analyses excluding the lowest-performing 15% of students in the mandatory cohort, and the pattern of results was the same. Again, without random assignment, causality is tricky. In this case, the assessment track within the free-to-switch condition is, by definition, allowing self-selection into assessment groups. That’s the point of the manipulation! Thus, we don’t have a perfect experimental design in this paper. I still think this is interesting, and the authors do a good job of dealing with it the best they can.

Takeaways

Even when less challenging assessments were available as an option, most students chose the more challenging track. The authors note that they see the same thing in their own classes, outside of the experiment, when choice is given. Personally, this surprised me; I would have thought more students would switch.

I find myself thinking about the students who tend to stop completing assignments, or begin “faking their way” through them, as the semester goes on. I wonder whether, if a “less challenging” option were available, they would choose it instead and get more out of the class assignments than they do when I offer only one option. I am also, once again, thinking about my AP calculus teacher whom I wrote about in Part 1 of this blog set. He didn’t let us choose the type of larger assessments (we all had to prepare for and take exams), but he did let us choose whether we wanted to do the homework, and if so, how much. He also had a “safety net,” so to speak: if our grades dropped, we were automatically switched into the homework-required track.

I’m going to keep thinking about how I can infuse some autonomy into my classes this spring and see how it goes!


References

(1) Cullen, S., & Oppenheimer, D. (2024). Choosing to learn: The importance of student autonomy in higher education. Science Advances, 10(29), eado6759. https://doi.org/10.1126/sciadv.ado6759