In a typically scintillating cross-talk on the changing culture of serious television, the AV Club’s Noel Murray tosses out a real money quote:
Over the past few weeks, a number of our film-critic pals have been bickering over which acclaimed new movie is “overrated,” and whether the profession is dying because they didn’t get an advance screening of Killers, and whether watching movies on an iPod is a crime against art, and whether the reviews (not the movie, mind you, the reviews) of Sex And The City 2 are misogynistic. Not only are most of these debates depressingly insular, they’re old. We’ve been having these same boring conversations for years now, with fewer and fewer participants.
With TV, on the other hand, “because we’re usually talking with people who are already watching, we get to kick around symbolism, character development, and real-world connections to what’s on the screen, rather than just writing about whether the show is worth a damn.”
Lately, even academic institutions feel the need to distinguish between “film reviewers” (consumer-reports types who issue verdicts on quality) and “film critics” (who toil to explicate subtext, context and other concerns). It’s clear that a field built as much by trial and error as by anything else has split to the point where film critics are expected to go plow their specialized terrain elsewhere while film reviewers are for The People.
So why does the level of literacy and engagement in TV criticism tend, on average, to be higher than in film criticism? Some of the reasons are already teased out by Murray and Tobias in that discussion. Event viewing is back with a vengeance, so people either commit to a show and want to talk about it at length or they’re just not watching. They’re not going to get all up in arms and call someone an idiot for disagreeing about a show’s professed objective quality. They’re already committed; everything else is a reasonable disagreement and part of the discussion.
Another factor: TV has technically been with us since the ’30s, but it only really took off in the ’50s, and the possibilities of extended narrative weren’t broached until the ’80s; relatively speaking, the medium is in its infancy. Film’s landscape, by contrast, is more splintered than ever. The simplest dividing line for viewers may be how important a strong narrative is, and whether it’s necessary at all. It’s almost impossible to imagine a consensus between arthouse denizens and anyone who celebrates a future of endless “Transformers” installments and high-concept Eddie Murphy vehicles.
TV series are the kind of thing that (by the very nature of the time commitment and the so-far-limited avenues for narrative experimentation) pretty much everyone still watching after, say, the fifth episode can agree on. The “merit” of the work is about the last thing on anyone’s mind by then, that question having been dispensed with long ago. And that enables actual criticism. It’s the very confining nature of what you can do with TV (so far, anyway) that allows reasonable and interesting discussions to happen.
The problem with a lot of contemporary film writing isn’t that it’s fixated on the stale (mainly because if you can’t engage readers with specifics about movies, you might as well go for the same talking points everyone else is mulling over); it’s that it’s trying to perform the impossible task of simultaneously placating a fictional audience’s taste and writing something interpretive. TV criticism can run free and wild. For film writing, at least for the moment, that’s an unsolvable problem.
[Photos: TV Guide regional 1949 prototype, TV Guide; Consumer Reports, Consumers Union, 2007; “At The Movies,” Tribune Entertainment, 1982-90]