Test Prep Courses: A Track Record of Mixed Results
For many parents looking to give their children the tools to succeed on the SAT or ACT, test prep courses feel like the obvious starting point. The SAT and ACT assess students’ understanding of academic concepts and reward familiarity with common test-taking strategies and so-called “tips and tricks.” Test prep courses are designed to teach exactly these skills, so by this logic they should be effective vehicles for improving scores. In practice, however, that intuition has mostly failed to hold.
Decades of peer-reviewed research and studies from non-profit organizations consistently find that SAT and ACT test prep courses produce only small average score increases. Reported effects typically fall in the range of roughly 20–30 points on the SAT and about 1 point on the ACT, gains that are modest relative to the total score scales and often similar to the improvements observed from simple retesting or increased familiarity with the exams. The overall empirical conclusion, then, is that test prep courses have limited impact on average test performance and do not meaningfully change outcomes for the typical student.
While the average effects of test prep courses are unremarkable, especially once one accounts for the modest gains associated with simple retesting, this does not mean that test prep should be dismissed outright. A closer inspection of the data reveals that score improvements, though weak overall, are driven disproportionately by a small subset of students.
The underlying score-change data are not symmetrically distributed. Studies report that score gains from test prep courses exhibit positive skew, with median improvements smaller than mean gains. This indicates that most students experience little to no score increase, while a relatively small subset achieves substantially larger improvements, pulling the average upward. As a result, the commonly cited mean score gains overstate the typical student experience and mask considerable variation in individual outcomes. Understanding this pattern matters not just academically, but practically, for how test prep should be designed.
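The mean-versus-median effect described above is easy to see with synthetic numbers. The sketch below uses made-up data (the group sizes and gain distributions are illustrative assumptions, not figures from any of the cited studies) to show how a small high-gaining minority pulls the mean well above the median:

```python
import random
import statistics

# Synthetic illustration only -- not real study data. Most students
# gain little; a small minority gains a lot, producing positive skew.
random.seed(0)
gains = [random.gauss(5, 10) for _ in range(900)]    # typical students
gains += [random.gauss(80, 20) for _ in range(100)]  # high-gaining minority

mean_gain = statistics.mean(gains)
median_gain = statistics.median(gains)

# Under positive skew the mean exceeds the median, so the commonly
# cited mean gain overstates the typical student's improvement.
print(f"mean gain: {mean_gain:.1f}, median gain: {median_gain:.1f}")
```

Even though 90% of the simulated students cluster around a small gain, the average lands far above the median, which is exactly why a headline "average improvement" can misrepresent the typical outcome.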
To begin, we ask: who are the students responsible for the observed test prep effect? At Ivy Tutor, we call them the grinders. Grinders are not merely motivated students. Many students complete optional assignments and take practice tests. Grinders go further: they recognize that effort is wasted unless it results in genuine learning and conceptual understanding. For a glimpse of grinders in action, one need only visit the SAT or ACT subreddits, where students regularly post challenging problems and solicit detailed feedback.
What about the remaining students, who may not have fully realized self-teaching skills? These are the students who typically require a tutor to impose the structure that grinders impose on themselves. Should the non-grinders still take test prep courses? At Ivy Tutor, the answer is yes, but only as part of a broader plan. Test prep courses are relatively inexpensive compared to one-on-one tutoring and provide a useful foundation. They introduce students to the core concepts, common strategies, and overall structure of the exam. While they rarely teach the finer distinctions that separate high scores from very high scores, they ensure that this basic material does not need to be revisited during individualized tutoring.
Designing Test Prep in Light of the Data
For Ivy Tutor, this understanding directly shaped how our test prep courses were designed. Our one-on-one tutoring students, who typically engage in sustained, individualized instruction over an extended period, improve their SAT scores by an average of roughly 200 points relative to their initial diagnostic baseline. We do not attribute these gains to test-taking strategies alone, nor do we view them as representative of what short-term or group-based instruction can reliably achieve. Rather, they reflect the cumulative effect of long-term engagement with underlying academic concepts, deliberate practice, and continuous feedback. The goal of our course design is therefore not to replicate these tutoring outcomes at scale, but to incorporate the most transferable elements of individualized instruction into a group setting.
One such element is sustained student engagement. Test prep content is often dry, and students struggle to remain attentive when material feels abstract or disconnected from their own experience. In one-on-one tutoring, personalization happens naturally. A tutor can say, “You missed this question, but students who score 650 on this section understand this specific concept,” directly tying the lesson to the student’s goals. That degree of individualization is not possible in a classroom setting, so our courses rely on different mechanisms to create relevance. One particularly effective approach is to imply personal stakes through imaginative scenarios. For example, instead of introducing a trigonometric function in isolation, an instructor might say, “If you don’t understand this function, you’re not surviving the zombie apocalypse,” and then walk students through an intentionally absurd scenario in which the concept becomes necessary for escape or survival. The literal practicality of the example is beside the point. What matters is that the student’s imagination is engaged, the concept acquires narrative weight, and the material no longer feels abstract. By implying “real-world” relevance, even playfully, students are more likely to internalize concepts in ways that closely resemble the learning that occurs during immersive one-on-one tutoring.
As Ivy Tutor continues to develop and study its test prep course model, our aim is to refine a course that can stand on its own rather than merely serve as a gateway to one-on-one tutoring. We do not expect such a course to replicate the roughly 200-point gains observed among our tutoring students, and we view that limitation as both realistic and acceptable. For many students, dramatic score increases are neither necessary nor aligned with their goals. Instead, a well-designed course can provide meaningful improvements, conceptual clarity, and strategic confidence, allowing students to reach outcomes that are appropriate for their aspirations. In this sense, the value of test prep is not in maximizing scores at all costs, but in delivering the right level of support to the right students at the right time.
Bibliography
ACT Inc. (2018). What we know about ACT test preparation. ACT Research Brief.
ACT Inc. (2019). Effectiveness of ACT Online Prep: A randomized controlled trial. ACT Research.
Becker, B. J. (1990). Coaching for the SAT: A review of the methodological issues. Review of Educational Research, 60(3), 373–417.
Briggs, D. C. (2001). The effect of admissions test preparation: Evidence from NELS:88. Chance, 14(1), 10–18.
Briggs, D. C. (2009). Preparation for college admission exams. In R. J. Crisp (Ed.), Handbook of college admission counseling (pp. 201–220). National Association for College Admission Counseling.
Briggs, D. C. (2009). Test preparation effects in a national sample: Differences in gain scores for math and reading. Journal of Educational Measurement, 46(2), 131–147.
College Board. (2017). Official SAT practice on Khan Academy: Effectiveness study. College Board Research.
College Board. (2020). Associations between official SAT practice and SAT score gains. College Board Research.
DerSimonian, R., & Laird, N. (1983). Evaluating the effect of coaching on SAT scores. Harvard Educational Review, 53(1), 1–18.
Messick, S., & Jungeblut, A. (1981). Time and method in coaching for the SAT. Educational Testing Service.
National Association for College Admission Counseling. (2009). Report of the commission on the use of standardized tests in undergraduate admission. NACAC.
Powers, D. E., & Rock, D. A. (1999). Effects of coaching on SAT I scores. Journal of Educational Measurement, 36(2), 93–118.