Reviews have always gnawed at me. Maybe not overtly, but every time I opened a copy of EGM or OXM, skimmed over to the reviews, and narrowed my eyes in on the score box, there was something that felt off. At the time it wasn’t a big deal—all of the information I needed to know about a game in a genre I didn’t care about was neatly summed up in a nice, small box with an easy-to-digest number. Bigger reviews for more interesting games had a better chance of hooking my eyes before they traipsed over the painstakingly crafted essay on whatever merits or demerits were worth noting, especially if it was a title I actually had an interest in. Halo 3? Hell yeah I’m going to read that. Cabela’s Outdoor Adventures? Don’t care. Check out the score and move on. The review’s only saving grace is that it’s less than 200 words long.
Reviews should grab the reader’s attention, though. There’s a reason why that wall of text exists, why we write a thousand words tearing apart every last detail. That blurb referencing a developer’s past accomplishments was researched in order to accurately convey what we feel is vital information. Yet it all boils down to a simple question: should I buy this game? Now, that question doesn’t have an easy answer. We wouldn’t write reviews if it did. But the information has to be presented in a condensed enough form that we reach the broadest reader base possible.
Including scores, as necessary as they may be, presents its own problem. First off, how do you decide what scale to use? Do you go with something akin to Giant Bomb’s five-star scale? OXM’s ten-point scale? How about 100 points? Well, then you have to figure out how you’re dividing the scale up. At NextLevel, we use a 100-point scale that’s similar to a class grading scale. Points match up with the A through F grades, with anything lower than a 60 qualifying as a failure. Not all scoring systems are perfect, and I think it’s fair to say that ours has some oversights. Namely, most games are going to get an 80—a problem prevalent at a number of outlets where 80 equals “good.”
But I digress. All of these scoring systems share the same problem: people want a quick barometer for whether or not a game is good. All of that text is nice for people who want a nuanced argument, but the easiest way to judge a game is to just hop on Metacritic, check out the aggregate, and maybe read the blurbs from whatever outlets it was able to pull from. Scores make it very simple to file reviews away as neat, easy-to-digest numbers. In a perfect world, or at least my perfect world, the review would exist on its own. Deciding whether or not a game is worth your time involves a number of factors; personal time available to play, amount of money available to spend, and interest in any number of qualities (genre, developer, franchise) all play a role in whether or not a person is going to purchase it. When review scores come into play, a large portion of that process goes away.
Operating as a smart consumer demands that the individual stay up to date on coverage of the industry, but only to a point. Good developers—and by extension good games—should come out on top, and for the most part will, even if people place a large emphasis on review scores. So, as much as I may dislike them, scores aren’t the scourge of the industry. Yet the industry could be so much richer if the conversations weren’t cut short in order to facilitate less involved decisions.