Does Practice Make Perfect? Multiple Homework Attempts and Student Learning: A Quantitative Study

Concurrent Session 6

Session Materials

Brief Abstract

Does practice make perfect? Or does it lead to grade inflation? The debate over multiple homework attempts continues even as the increased use of online homework management systems intensifies the need for data-based answers. This quantitative study of 2,016 online learners found that learning does increase with multiple homework attempts. Practice does make perfect.


Kathy Archer, Assistant Professor of Economics at Grand Canyon University, holds a Doctor of Business Administration (DBA). Dr. Archer teaches entry-level economics classes in both the online and traditional modalities. Additionally, she has chaired four dissertation committees. Her personal research interests include teaching economics, particularly in the online environment, and wage inequality.

Extended Abstract

It’s a fact of learning and a fact of life: practice makes perfect. Feedback from each repetition informs the next attempt until mastery is achieved. This common-sense approach stops short, however, when it comes to student homework, where the debate about whether to allow multiple attempts continues. Proponents cite the value of practice and of learning by reworking problems (Palocsay & Stevens, 2008; Titard et al., 2014). Opponents cite grade inflation, student guessing behaviors, and superficial learning rather than true mastery (Rhodes & Sarbaum, 2015; Fish, 2015). Meanwhile, the increased use of online homework management systems that easily allow multiple attempts intensifies the need for a data-based answer to the question.

Previous studies are far from unanimous in their support of multiple attempts, or of web-based homework management systems in general. Supporters contend that the flexibility of extensive practice and immediate feedback leads to improved student performance (Arora, Rho, & Masson, 2013) as well as increased student enthusiasm and motivation (Halcrow & Dunnigan, 2012). Opposing studies found that multiple attempts improve homework grades but not exam grades, leading to grade inflation without increased learning (Rhodes & Sarbaum, 2015a). The question is further complicated by the growth of adult learning and online education. Would the effect of multiple homework attempts be the same for adult learners in a fully online environment as found in previous studies of traditional students?

The research question that guided this study was: “Is there a significant relationship between allowing multiple homework attempts and improved student learning as measured by exam scores for adult learners in a fully online environment?”

This exploratory analysis was based on a natural experiment in which a sample of 2,016 online students in an entry-level university economics course was divided into two groups to examine the relationship between multiple homework attempts and exam scores. Curriculum changes mandated by the university over a two-year period determined the makeup of the groups.

Group A completed homework assignments on simple Microsoft Excel templates. The templates were structured so that a cell changed color when the correct answer was entered, giving students the opportunity to change their answers, but the assignments were not graded and no feedback was offered until the homework was submitted. Students were allowed only one submitted attempt for each assignment.

Group B used an online homework management system that allowed up to three attempts on each homework question. The system provided detailed feedback after each attempt, then presented the student with a similar problem in which numbers and other details were changed, so each attempt was a fresh problem. The system recorded only the highest score across attempts, so there was no penalty for trying again.
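The Group B scoring policy can be sketched in a few lines of Python. This is a hypothetical illustration of the rule described above (at most three attempts, highest score kept), not code from the actual homework management system:

```python
def question_score(attempt_scores, max_attempts=3):
    """Recorded score for one question: the best of at most
    `max_attempts` tries; extra attempts carry no penalty."""
    attempts = attempt_scores[:max_attempts]  # only the first three tries count
    if not attempts:
        return 0.0  # question never attempted
    return max(attempts)  # record only the highest score
```

For example, a student who scored 60 on a first attempt and 85 on a second would be recorded at 85, just as if the first attempt had never happened.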

The course that was the focus of this study is a fully online introductory economics course with adult learners ranging in age from 16 to more than 70. The class, which covers both microeconomics and macroeconomics topics, runs 7 weeks. The sample of 2,016 students included every student enrolled in the course from January 2015 to December 2016.

The methodology for this study relied on analysis of variance and regression analysis to develop inferential statistics for comparing results between the two groups. The process began with a comparison of mean exam scores and homework scores based on descriptive statistics alone. This was followed by an analysis of variance (ANOVA) to confirm that the observed difference in means was statistically significant. Because the two comparison groups were of unequal size, a t-test was also performed to confirm that the groups differed significantly. Finally, regression analysis was used to better define the relationship between homework scores and exam scores.
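As a rough illustration of this pipeline, the steps above can be sketched in Python with SciPy. The score data here are synthetic stand-ins (the actual course records are not public), so the specific numbers are assumptions; only the sequence of tests mirrors the description:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical exam scores: Group A (one attempt), Group B (up to three).
group_a_exam = rng.normal(70, 10, 900)
group_b_exam = rng.normal(77, 10, 1100)

# Step 1: descriptive comparison of means.
mean_diff = group_b_exam.mean() - group_a_exam.mean()

# Step 2: one-way ANOVA to test whether the difference in means is significant.
f_stat, anova_p = stats.f_oneway(group_a_exam, group_b_exam)

# Step 3: Welch's t-test, which does not assume equal group sizes or variances.
t_stat, t_p = stats.ttest_ind(group_a_exam, group_b_exam, equal_var=False)

# Step 4: regress exam scores on homework scores (Group B shown; the
# homework scores here are a correlated synthetic stand-in).
group_b_hw = group_b_exam + rng.normal(0, 5, 1100)
slope, intercept, r, reg_p, se = stats.linregress(group_b_hw, group_b_exam)
r_squared = r ** 2  # share of exam-score variance explained by homework
```

With two groups, the ANOVA and t-test address the same question (the F statistic is the square of the pooled t statistic), which is why the t-test serves here as a confirmation step.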

Concerns about grade inflation raised in previous research were quickly put to rest by the findings of the current study. Mean homework scores increased by only 1% after the introduction of the homework management system with multiple attempts, and median scores decreased by 3.91%. Neither change was statistically significant.

Changes in exam scores were strikingly different. Mean exam scores increased by 7.93% and median scores by 9.52% with the introduction of the homework management system with multiple attempts. Both results were statistically significant. Furthermore, regression analysis found that 74.52% of the variance in exam scores was explained by variance in homework scores for Group B, the students using the web-based homework management system that allowed multiple attempts. For Group A, the students allowed only one homework submission, variance in homework scores explained only 62.34% of the variance in exam scores.

The results of this study indicate that student learning, as measured by exam performance, increases with multiple homework attempts administered through a web-based homework management system. This contradicts some previous studies, which found such systems to be associated with guessing behaviors, grade inflation, and superficial learning leading to poor exam performance (Bowman et al., 2014; Fatemi, Marquis, & Wasan, 2015; Rhodes & Sarbaum, 2015a). In the current study, neither guessing behaviors nor grade inflation was observed after the introduction of multiple attempts, as evidenced by the absence of any significant change in homework scores. Instead, multiple attempts were associated with increased student learning as measured by exam scores, and with a stronger relationship between homework scores and exam scores. The difference in findings may be explained by the fact that the system in this study presented a different version of the problem with each attempt, reducing the opportunity for guessing. It may be further explained by the adult population from which the sample was drawn, which may be less inclined toward guessing behaviors.

Opportunities for further study in this area are many. Of particular interest are studies that examine student demographics, studies that examine time spent on homework, and studies that explore how these factors affect low-performing students compared to high-performing students.