Food safety ratings emoji

Last month, in response to longtime demand, public health ratings were rolled out for Seattle restaurants. You know, those quick-glance stickers you see in the windows of restaurants in other cities informing you of their public health “grade.” Seattle’s are emojis signifying one of four grades: “Needs to improve,” “Okay,” “Good,” and “Excellent.” You can see them in our February Explainer, right here.

Here’s the thing: With the exception of the lowest “Needs to improve” category—which signifies a fairly high level of gnarliness, including a health department closure and/or multiple return inspections that year—the three higher categories are graded on a curve, and a different curve for each zip code. This is done to correct for differences among inspectors, each of whom concentrates efforts in a given zip code for a given period, and each of whom brings a slightly different sensibility to the task.
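For the statistically curious, here’s roughly what that looks like in practice. Below is a minimal Python sketch of grading on a per-zip-code curve; the cutoffs, point values, and names are all invented for illustration, and this is not King County’s actual formula.

```python
# A minimal sketch of per-zip-code curve grading. Everything here is
# hypothetical (cutoffs, scores, names), not King County's real formula.

def curve_grades(scores_by_zip, flagged=frozenset()):
    """scores_by_zip: {zip_code: {restaurant: avg violation points}},
    where lower points mean a cleaner inspection record. `flagged`
    holds restaurants that hit the absolute criteria (closure,
    multiple return inspections) and skip the curve entirely."""
    grades = {}
    for zip_code, scores in scores_by_zip.items():
        # Rank restaurants against their own zip code only: the curve.
        ranked = sorted((r for r in scores if r not in flagged),
                        key=scores.get)  # cleanest first
        if not ranked:
            continue
        for rank, name in enumerate(ranked):
            share = rank / len(ranked)  # 0.0 = cleanest in this zip
            if share < 0.50:            # hypothetical cutoff
                grades[name] = "Excellent"
            elif share < 0.75:          # hypothetical cutoff
                grades[name] = "Good"
            else:
                grades[name] = "Okay"
    for name in flagged:
        grades[name] = "Needs to improve"  # absolute, not curved
    return grades
```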

If you thought restaurant public health inspections hewed to more quantifiable criteria—so did we. Until Becky Elias, manager of the Food Safety Program at Public Health – Seattle & King County, schooled us that perfectly consistent criteria are impossible when 45 inspectors are grading 12,000 businesses. “We know from our analysis that there’s a variation that occurs across inspectors,” she told me by phone. “Inspections include discretion. It’s a dynamic activity that varies based on the restaurant, on the severity of the risk, and so on. So with all of that, there’s a slight difference in how they score inspections.”

As everyone remembers from high school algebra class—or perhaps from working at Microsoft during its dreaded “stack ranking” days—the curve is a nefarious thing, evaluating performance not on the performer’s objective merits but on how the performer stacks up against the others being graded. So how does this help the hungry consumer who is just trying to avoid a less-than-pristine place for dinner? Well, if that consumer is limiting choices to restaurants within a single zip code, he or she can make relative determinations—this “Good” banh mi shop is superior to the “Okay” noodle house next door.

Problem is, diners aren’t always choosing from restaurants within a single zip code. And if different zip codes have different rubrics for what separates a “Good” from an “Okay,” we aren’t getting meaningful intel about how restaurants in different areas compare. “Yes, there is a chance that you could have two businesses in two different zip codes with the same violations and the same point average across four inspections—and they might end up in a different category,” Elias confirmed.
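You can watch Elias’s scenario play out in the sketch above: hand it two zip codes where a restaurant posts the identical point average, and the grades diverge. (The zip codes stand in for Capitol Hill and Bellevue; every number is invented.)

```python
grades = curve_grades({
    "98122": {"banh_mi": 8, "pho": 10, "noodles": 18, "tacos": 30},
    "98004": {"bistro": 18, "sushi": 25, "deli": 40},
})
print(grades["noodles"], "vs", grades["bistro"])
# Both average 18 points, but noodles is mid-pack in its zip code
# ("Good") while bistro is the cleanest in its own ("Excellent").
```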

Bummer for the diner, yeah—but arguably a bigger bummer for the restaurateur. Since the mid-January rollout of this program, we’ve heard unhappy feedback from restaurateurs on Capitol Hill who believe they’d be getting higher scores in other neighborhoods.

The thing is, this may be the best we’ve got. Daniel E. Ho, a Stanford professor who has studied and documented restaurant grading in a dozen jurisdictions (and who helped create ours), defends the King County grading system in this Seattle Times piece. Among the points he makes: “If every Bellevue restaurant received the top grade (which is actually the case in places like San Diego, where some 99.9 percent of restaurants earn As), that disclosure would offer little help to Bellevue diners hoping to distinguish restaurants on food safety.”

Beyond that, what exactly does this flawed system do for diners? Well, it does unequivocally reveal restaurants with the unfortunate “Needs to improve” rating—the lowest rating, not subject to the curve, and arguably the most meaningful for diners deciding which spots to avoid. Elias also reports that grades are based on no fewer than four unannounced health inspections, a higher bar than anywhere else in the country. Which means the grades should be pretty solid reflections of reality, at least where the rubric offers the most definitive judgments: at the very top and very bottom ends.

As for that squirrelly middle ground, Elias somewhat wearily offers this: “We know the public wants a rating system, but the truth is, the inspection process wasn’t designed for the sake of a rating system. It was designed to address public safety issues, and it is very effective at doing that. We’re very skilled at outbreak investigations, at stopping them, at preventing them from spreading. That’s what our inspection system was designed to do.”