The Carletonian is a veteran watchdog on the issue of test-optionality at Carleton. In 2014, Emma Nicosia interviewed Vice President and Dean of Admissions and Financial Aid Paul Thiboutot on the matter. His rationale for requiring testing: “He explained that, as a stand-alone measure, a student’s SAT scores (or ACT) are not accurate in indicating a student’s college readiness. But in conjunction with other things such as GPA, class rank, and an internal rating system, scores become a useful tool.”
Here’s Paul in a 2016 interview with Justine Seligson: “I appreciate what standardized tests provide as an additional element in the evaluation of students… Not as a cut-off, not as a determinant, not as an absolute, but as an added factor.”
In 2017, an Admissions and Financial Aid Committee vote brought recent discussions of test-optionality to an end. Lizzy Ehren and Dylan Larson-Harsch interviewed then-AFAC chair David Lefkowitz for his rationale: “He explained that the admissions office considers test scores as only one of a myriad of factors,” they wrote.
This is a compelling explanation. Carleton’s holistic review process allows us to read test scores in the context of everything else we know about an applicant. Sure, scores correlate strongly with both wealth and race, but readers adjust how they weigh test scores based on an applicant’s location, high school prestige, race, and family background in higher education. This means that a high score can reveal promise in a candidate from a new high school in Japan, or in a candidate from a graduating class of six students in Perley, MN.
Perhaps the best part of an explanation like this: we can support these intuitions with numbers. Take a moment and consider how you might design a study to measure the value that a test score adds to an otherwise complete college application.
Researchers from a variety of backgrounds, from statistics to consulting to admissions and from small colleges to flagship universities, have attacked this problem similarly. First, they ignore test scores and find the correlation between an application’s “quality,” as measured in some holistic manner, and the student’s future college GPA. Next, they add test scores to that measure of “quality” and see how much this correlation improves.
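If you’re curious what that procedure looks like in practice, here is a minimal sketch of such an incremental-validity check in Python, run on made-up numbers. The data, variable names, and correlation structure below are purely illustrative assumptions, not figures from any of the studies discussed here:

import numpy as np

# Purely illustrative synthetic data -- not drawn from any real admissions study.
rng = np.random.default_rng(0)
n = 500

# A holistic "application quality" rating, and a test score that largely
# tracks it (strong files and high scores tend to go together).
quality = rng.normal(size=n)
test_score = 0.7 * quality + 0.3 * rng.normal(size=n)

# College GPA driven mostly by the holistic rating, plus noise.
gpa = 0.6 * quality + 0.05 * test_score + rng.normal(scale=0.5, size=n)

def fit_correlation(predictors, outcome):
    # Correlation between the outcome and its least-squares prediction.
    design = np.column_stack([np.ones(len(outcome)), predictors])
    coefficients, *_ = np.linalg.lstsq(design, outcome, rcond=None)
    predicted = design @ coefficients
    return np.corrcoef(predicted, outcome)[0, 1]

r_without = fit_correlation(quality, gpa)
r_with = fit_correlation(np.column_stack([quality, test_score]), gpa)

print(f"correlation without scores: {r_without:.3f}")
print(f"correlation with scores:    {r_with:.3f}")
print(f"improvement: {100 * (r_with - r_without):.1f} percentage points")

The whole exercise comes down to the last line: how many percentage points of correlation do test scores add once the rest of the file is already in the model?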
In one study at Ithaca College, a team of admissions officers reviewed five hundred past applications and created a rating system to quantify the strength of each applicant’s high school background based on a variety of measures, including the difficulty of their courses and their grades therein. How much did the correlation improve with test scores in the picture? Between one and two percentage points.
Could this be a fluke, a result of poor experimental design or just sheer luck? No: Ithaca’s findings match those from similarly minded studies at the University of Georgia, DePaul, Hopkins, and Middlebury, to name a few.
One might argue that the College Board just redesigned the SAT, so this research could be outdated. But the last redesign of the SAT made no improvement to its predictive power, and the new changes make the test further resemble the ACT, which was also implicated in the research discussed above.
Where does that leave us regarding our original explanation—the go-to story that standardized testing is a valuable “added factor,” a “useful tool” when used “in conjunction with other things”? A strong body of literature argues just the opposite. To an otherwise-complete application, test scores add little to nothing, even with all of the test-score-contextualizing demographic information that our admissions office extols.
Validity studies like these are costly and time-intensive, so Carleton hasn’t run its own. And why should it, with all these results pointing in the same direction? James Fergerson, Carleton’s Director of Institutional Research and Assessment for eight years, insists that there’s no reason to believe that Carleton would get different results. It’s unrealistic and frankly egotistical to think that our admissions staff can read these scores significantly better than those at our peer institutions, especially those with the care to research their own reading practices.
This brings us to test-optionality, the policy of making SAT and ACT scores an optional component of applying to Carleton. The gains here are tangible. When a college goes test-optional, it sees a lasting, yearly influx of several hundred additional applications. Students who don’t submit test scores disproportionately identify as first-generation students, members of minority groups, Pell Grant recipients, and students with learning differences. And when they arrive at college, non-submitters perform almost identically (within 0.05 GPA) to their test-submitting peers.
Beyond attracting these applicants, test-optionality inflates a college’s reported test scores, which in turn bolster national rankings. These aren’t ends in themselves, but rather means to improve the academic strength of our college. When potential applicants see better numbers, they self-select to create stronger incoming classes, which in turn create better numbers, which in turn create stronger classes.
At first glance, Carleton’s defense of standardized testing is compelling. But what Thiboutot and Lefkowitz did not mention in their interviews with students were the decades of predictive-validity research that question their faith in testing, or the improvements in the size and caliber of applicant pools that test-optionality could deliver within just a few years. In higher education, a culture known for administrations afraid of change, Carleton should consider the opportunity costs of fulfilling that stereotype.