The uproar Armond White caused by panning “District 9” has raised a lot of interesting points about The State of Film Criticism. It prompted Slate’s Daniel Engber to fret over his original pan of the film; being one of the few dissenting voices on the review aggregator Rotten Tomatoes was, he wrote, “beginning to make me nervous.”
So Engber did a little number-crunching and ranked 20 prominent critics from most to least contrarian based on how often they agreed with the Tomatometer. No surprise, White’s the most contrarian — but even he only went against the group 50% of the time. Everyone else spreads out from there up to 83% agreement (where the AV Club’s Keith Phipps sits). Engber then proceeds to extrapolate conclusions (“successful” — presumably meaning employed — critics can neither agree nor disagree with the consensus too often) and to wonder whether critics keep track (consciously or otherwise) of their rate of dissent.
What strikes me about the question is a) its meaninglessness and b) the assumption that it’s acceptable to draw conclusions from this kind of mathematical evaluation. White’s 50% ratio doesn’t (necessarily) mean he’s calibrating his opinions against the mainstream; it could just as easily mean they’re literally coin-toss random as to where he’ll fall each time. Beyond that, Engber’s definition of pro critics is awfully narrow — it’d look a lot different if it included, say, Cinema Scope’s Mark Peranson or the Village Voice’s J. Hoberman, critics with the luxury of choosing what they write about, implicitly rejecting the mainstream.
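As an aside, both Engber’s tally and the coin-toss point are easy to make concrete. A minimal Python sketch — every critic, film, and number below is invented for illustration; none of it comes from Engber’s actual dataset:

```python
# A hedged, toy reproduction of Engber-style "contrarian" scoring.
# All data here is made up for illustration.
import random

def agreement_rate(critic_verdicts, consensus_scores, fresh_threshold=60):
    """critic_verdicts: list of bools (True = fresh verdict);
    consensus_scores: parallel list of Tomatometer percentages.
    A film counts as consensus-fresh at or above fresh_threshold."""
    matches = sum(
        verdict == (score >= fresh_threshold)
        for verdict, score in zip(critic_verdicts, consensus_scores)
    )
    return matches / len(critic_verdicts)

# Two invented films: consensus 85% fresh and 30% fresh.
consensus = [85, 30]
print(agreement_rate([False, True], consensus))  # disagrees twice -> 0.0
print(agreement_rate([True, False], consensus))  # agrees twice -> 1.0

# The coin-toss point: a critic flipping a fair coin agrees with the
# consensus about half the time, whatever the consensus happens to be.
random.seed(0)
coin_flips = [random.random() < 0.5 for _ in range(10_000)]
crowd = [random.choice([85, 30]) for _ in range(10_000)]
print(round(agreement_rate(coin_flips, crowd), 2))  # roughly 0.5
```

Which is the point: a 50% agreement rate is exactly what pure chance produces, so the number alone can’t distinguish deliberate contrarianism from indifference to the consensus.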
Almost exactly 11 years ago, Slate ran a little rant by Jacob Weisberg about, yes, The State of Film Criticism, called “Uncritical Critics” and prompted by Warner Bros.’ “unprecedented step…of refusing to let reviewers see ‘The Avengers’ before it was released.” (Ah, for the days when refusing to let critics see a movie in advance was unprecedented.) Skip over Weisberg’s now deeply dated complaints and notice what’s not there: any measurement of a presumably objective consensus that speaks for most audiences, set against individual reviewers. Rotten Tomatoes was founded a whole two days before Weisberg held court — back when establishing consensus amongst critics involved the hard work of reading a bunch of reviews, seeing the movie, measuring those reviews against your own opinion and so on. You know, the bad old days, when people read newsprint and your local critic’s word might be all you had to go on.
Rotten Tomatoes renders all this context irrelevant; it gives you an instant snapshot of the establishment, large and small, valid and vapid. You don’t have to read the review at all, much less follow a critic consistently over time, measure your sensibilities against theirs and gauge how their viewpoint applies to yours. That’s exactly what Jonathan Rosenbaum was talking about in 2001 in a holiday movie round-up, responding to a reader who complained that “he could never tell from my reviews whether I was recommending a movie or not.” His eminently reasonable response: “Recommending particular movies is something I can do for friends or relatives, but trying to make recommendations for strangers — even though plenty of critics do — seems a little presumptuous. Why should strangers give up their own tastes and accept my interests and limitations as their own?”
But Rotten Tomatoes inverts that premise: recommendations are collective; you can gauge a critic’s reliability by virtue of mathematical objectivity. Once, you got mad at your local critic based on personal taste; now, you do it with quasi-scientific authority, backed by legions of anonymous reviews crunched into a mean decision. And that idea of aggregate authority vs. the hapless individual critic may be the biggest change in the reader/critic battle in the last ten years.