20 November 2008

resrch dzine: ur doin it rong

if only they had LOLPublicIntellectuals. peter salins, professor of political science at stony brook university, had a pro-SAT op-ed in the times yesterday that gave me a bad case of the ?!??!?!?!?!!!s.

salins's piece refers to an analysis of graduation rates at SUNY schools, before and after some of them decided to increase emphasis on SAT scores in admissions decisions. guess what! the schools that raised their emphasis on SAT scores, while presumably (?) leaving other aspects of the admissions process unchanged, had higher average graduation rates than the schools that left their SAT emphases alone. he closes by noting that it's "fashionable" to be negative about the SAT, but that people who wish to make judgments based on "good empirical evidence" should be a little nicer to our friends at ETS.

somebody call don green! salins has decided that the SUNY experience is "a controlled experiment of sorts" and that the evidence is "fairly conclusive[]." i think this is a convenience sample, and that the evidence is suggestive at best.

there are also a huge number of problems with uncritically endorsing the (racist, highly-responsive-to-expensive-prep-courses) SAT that i won't go into here. salins doesn't go into them either, which in the real world is certainly the worst of his problems -- but it's the research design claims that really get my goat today.

salins defines the research question this way: "do SATs predict graduation rates more accurately than high school grade-point averages?"

with the sort of keen accuracy that would surely have received a perfectly adequate score on a high school statistics test, he notes: "If we look merely at studies that statistically correlate SAT scores and high school grades with graduation rates, we find that, indeed, the two standards are roughly equivalent...However, since students with high SAT scores tend to have better high school grade-point averages, this data doesn’t tell us which of the indicators — independent of the other — is a better predictor of college success. Instead, we need to look at the two factors separately." true enough. so we ask:

colleagues! (what is it?) colleagues! (what is it?)

what is the usual way to "look at the two factors separately"?

well, given a big pool of convenience data referring to your chosen unit of observation (probably the college class), you'd gather a bunch of independent variables (pertinent demographics, popular programs of study, average SAT scores, average incoming GPA, so on, so forth, ad infinitum) and the relevant dependent variable, the average graduation rate. and then you do a big multiple regression. (with an instrumental variable, if there's one handy.) [also blah blah structural equation modeling blah blah blah.]
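(for the terminally literal, a toy version of that regression in python. every column name and number below is invented for illustration -- none of it comes from salins or the actual SUNY data:)

```python
# toy sketch of the "big multiple regression" -- fake data, made-up columns
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200  # pretend we have 200 college-class observations

df = pd.DataFrame({
    "avg_sat": rng.normal(1100, 100, n),    # invented
    "avg_hs_gpa": rng.normal(3.2, 0.3, n),  # invented
    "pct_pell": rng.uniform(0.1, 0.6, n),   # stand-in demographic control
    "pct_stem": rng.uniform(0.1, 0.5, n),   # stand-in program-mix control
})
# fake dependent variable: average six-year graduation rate
df["grad_rate"] = (0.0002 * df["avg_sat"] + 0.10 * df["avg_hs_gpa"]
                   - 0.20 * df["pct_pell"] + rng.normal(0, 0.05, n))

X = sm.add_constant(df[["avg_sat", "avg_hs_gpa", "pct_pell", "pct_stem"]])
fit = sm.OLS(df["grad_rate"], X).fit()
print(fit.summary())
```

the coefficients on avg_sat and avg_hs_gpa are each predictor's association with graduation rates holding the other fixed -- which is exactly the "independent of the other" question, and exactly what the SUNY before/after comparison never estimates.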

OR, if you had tons and tons of resources, you would secretly [SECRETLY!] change the weighting scheme for some incoming classes [CLASSES NOT SCHOOLS!] and not for others. you would make your secret change without altering anything else about the college, the admissions process or the admissions pool. and because you're trying to compare the impact of incoming average GPA with the impact of incoming average SAT, you have at least three groups: two treatments (increase emphasis on SAT and increase emphasis on grades) and one control (change nothing).
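(the assignment step, in toy python -- cohort labels and arm names invented, obviously:)

```python
# randomly assign incoming classes [CLASSES NOT SCHOOLS!] to three arms.
# all cohort labels here are hypothetical.
import random

random.seed(42)
cohorts = [f"cohort_{year}" for year in range(1997, 2009)]
arms = ["raise_sat_weight", "raise_gpa_weight", "control"]

random.shuffle(cohorts)
assignment = {c: arms[i % 3] for i, c in enumerate(cohorts)}
for cohort, arm in sorted(assignment.items()):
    print(cohort, "->", arm)
```

with everything else held fixed, comparing graduation rates across those three arms would actually pit SAT weighting against GPA weighting. that's the control the SUNY "experiment" doesn't have.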

that sort of design would be a "controlled experiment of sorts."

it is not a "controlled experiment of sorts" when nine of sixteen schools, using mechanisms that logic suggests could not have been random, change their SAT emphases and those schools then observe a better average six-year graduation rate than the seven schools that didn't. like i said, it's suggestive. but it's not a controlled experiment because it doesn't control. what else changed between 1997 and 2001 at these nine schools? increasing emphasis on SAT scores certainly doesn't test SAT scores against GPA. and, to use the methodological back door into the race/class issues that i said i wouldn't get into, it also doesn't tell us what sorts of applicants are discouraged from applying when they know the SAT is weighted more heavily.

here's an alternative hypothesis: any mechanism at all that decreases the number of poor (black, latino, whatev) kids in college is also going to decrease attrition.
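the arithmetic there is depressingly easy. a toy illustration, with every number invented: suppose low-income admits graduate at 45% and everyone else at 70%, and suppose heavier SAT weighting does nothing but shift the mix of the admitted class.

```python
# toy composition effect -- every rate and share below is invented
def class_grad_rate(share_low_income, rate_low=0.45, rate_high=0.70):
    """average graduation rate of an admitted class, given only its mix."""
    return share_low_income * rate_low + (1 - share_low_income) * rate_high

before = class_grad_rate(0.40)  # hypothetical pre-emphasis class mix
after = class_grad_rate(0.20)   # hypothetical post-emphasis class mix
print(f"before: {before:.1%}, after: {after:.1%}")  # before: 60.0%, after: 65.0%
```

five points of "improvement," and the SAT predicted exactly nothing. the composition of the class did all the work.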