The Discriminatory Message Behind the SAT’s New “Adversity Score”

Translation: This exam wasn’t made for you.

Following the college admissions scandal (which, somehow, came as a surprise to a good number of people), the College Board decided to take a look at the SAT exam to see how they could make the college admissions process more equitable.

On the bright side, they recognized that the SAT is not assessing what it should be assessing. They also recognized that certain groups of students earned higher scores than others, and that these scores could be attributed to a bunch of different forms of privilege — financial privilege included.

But instead of taking this revelation and doing something productive with it, the College Board decided to leave the exam alone and add an adversity score to students’ exam reports.

This “adversity score” ranges from 1 to 100, with 50 representing “average” levels of adversity, and includes the following measurements (a rough sketch of how a composite like this might be assembled follows the list):

  • neighborhood crime rate
  • neighborhood poverty rate
  • neighborhood housing values
  • neighborhood vacancy rate
  • median income in the student’s neighborhood
  • education level of parent(s)
  • whether the student was raised by 1 or 2 parents
  • whether English is the student’s first language
  • level of rigor of the curriculum at the student’s school
  • free lunch rate at the student’s school
  • whether the student had the opportunity to take AP classes
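
To be fair, the College Board hasn’t published its exact formula, so here’s a rough, purely illustrative sketch of what a composite like this might look like, assuming each factor has already been converted to a 0–100 percentile (higher meaning more adversity) and everything is weighted equally. The factor names and the equal weighting are my assumptions, not theirs.

```python
# Purely illustrative: the College Board has not published its methodology.
# Assume each factor is already a 0-100 percentile where higher = more
# adversity (so things like income and housing values are inverted first),
# and assume equal weights.

FACTORS = [
    "neighborhood_crime_rate",
    "neighborhood_poverty_rate",
    "neighborhood_housing_values",   # inverted: lower values -> higher adversity
    "neighborhood_vacancy_rate",
    "median_neighborhood_income",    # inverted: lower income -> higher adversity
    "parent_education_level",        # inverted
    "single_parent_household",
    "english_not_first_language",
    "curriculum_rigor",              # inverted
    "school_free_lunch_rate",
    "no_ap_access",
]

def adversity_score(percentiles: dict[str, float]) -> float:
    """Average the factor percentiles into a single 1-100 score.

    `percentiles` maps each factor name to a 0-100 value where 50 is the
    national average. Missing factors are simply skipped here; the real
    dashboard presumably handles them differently.
    """
    values = [percentiles[f] for f in FACTORS if f in percentiles]
    if not values:
        raise ValueError("no factor data provided")
    return max(1.0, min(100.0, sum(values) / len(values)))

# Example: a student whose neighborhood and school sit near the 70th
# percentile of disadvantage on most factors.
student = {factor: 70.0 for factor in FACTORS}
print(adversity_score(student))  # -> 70.0
```

Notice that nothing in that calculation touches a single exam question.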

Translation: “We know this exam is discriminatory, but instead of changing it, we’ll just let your dream school know that you grew up in a ‘bad’ neighborhood and that your mom only has a GED and works as a cleaning lady. This way, you still have a chance as one of their ‘diverse’ candidates.”

No. Nope. No, no, no.

Maybe I translated it wrong. Let me try again.

Translation #2: “This exam, and the entire college admissions process, was designed to elevate people who are already privileged. You aren’t one of them, so we don’t expect you to do well. But don’t worry; we’ll let your dream school know that you could have gotten a better score if only you hadn’t grown up over there.”

Am I doing something wrong here? Okay, third time’s a charm.

Translation #3: “We understand that there are many other measures of your academic potential. However, we won’t be incorporating any of them into our exam. Don’t worry, though; we know that your parents couldn’t afford a private tutor, so we’ll just let your dream school know about that.”

GET OUT.

Let’s dissect this ridiculous exam by taking a look at what makes an assessment reliable and valid.

Validity refers to whether an assessment is measuring what it’s supposed to measure. For example, let’s say we’re giving an English language assessment and students are asked to write an essay about the proper way to maintain certain farm equipment. Students who have never been to a farm might score poorly on this section of the exam, but not because of their English proficiency — because of the bias present in the prompt.

Sure — you can attach an “adversity score” to this question and say that this student grew up in a city and his parents didn’t have enough money to expose him to farm equipment, but how does that make the assessment any more valid?

Reliability refers to the consistency of an assessment. In other words, if you take this exam every day for a week, will your score vary much? If yes, we have a problem. Reliability also refers to consistency within the exam itself. For example, if questions 5, 8, 12, 14, and 23 all ask the student to choose the past tense form of a verb, that student’s answers should be consistent: a student who has the skill should get nearly all of them right, and a student who doesn’t should miss nearly all of them.
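
If you want to see what those two flavors of consistency look like as an actual check, here’s a tiny sketch with made-up numbers: a test-retest correlation across two sittings, and a crude agreement check across the past-tense questions (real psychometricians use fancier statistics, but the idea is the same).

```python
from statistics import correlation  # Python 3.10+

# Hypothetical scores from the same five students taking the exam twice.
# A correlation close to 1.0 between sittings suggests test-retest reliability.
first_sitting = [1180, 1340, 990, 1450, 1210]
second_sitting = [1200, 1330, 1010, 1440, 1190]
print(round(correlation(first_sitting, second_sitting), 3))  # close to 1.0

# Internal consistency: questions 5, 8, 12, 14, and 23 all test past-tense
# verbs, so one student's answers to them should mostly agree.
past_tense_answers = {5: True, 8: True, 12: True, 14: False, 23: True}
results = list(past_tense_answers.values())
agreement = max(results.count(True), results.count(False)) / len(results)
print(agreement)  # 0.8 -> mostly consistent; much lower would suggest the
                  # questions aren't really measuring the same skill
```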

If an assessment is valid and reliable, we know it’s measuring what it’s supposed to measure, and we know that the questions are free of implicit and explicit bias.

The SAT meets neither of these requirements.

This is why wealthy parents spend a fortune to help their kids memorize SAT vocabulary and to drill SAT math. This is why schools in wealthy districts ensure that students spend a year or more preparing — because the exam is not reliable or valid, and it takes a ridiculous amount of preparation to “hack” your way to a near-perfect score.

How can an exam be reliable and valid when the scores increase as a family’s income increases? How can an exam be reliable and valid when Asian and white students score significantly higher than Black and Latinx students?

The College Board created the “adversity score” in a pitiful attempt to rationalize the implicit bias that makes their exam invalid and unreliable in the first place.

Simply put: this exam wasn’t designed for everyone, and instead of doing something about it, they’d rather blame neighborhood crime rates, single parenthood, and free lunch for the measurable racial and economic disparities that are evident in the scores.

I’m not buying it, and you shouldn’t, either.

Hey, College Board: Let me know when you decide to make exam retakes free, and when you stop letting companies make a fortune on test prep services. After all, you chose to license your exam material to test prep companies and earn a nice commission.

Let me know when you take steps to eliminate the bias that exists in the exam itself.

Let me know when you figure out how to design an exam that a capable student could get a perfect score on, even if she grew up in a homeless shelter and went to a school that didn’t have computers or iPads.

Let me know when you stop scrapping questions that Black students score highest on in favor of questions that white male students score highest on, and then trying to control for socioeconomic factors with an “adversity score”.

Basically, let me know when you actually do something.

Until then, I don’t want to hear it.

