Using the Wine Advocate’s 100-point system, Antonio Galloni, the WA’s California expert, awarded 95 points or above to 223 California wines, about 1/4 of the wines receiving published reviews. (The reviews for low-scoring wines are not released.) Galloni’s boss Robert Parker gave 100-point scores to 17 wines from the Northern Rhone’s 2009-2011 vintages, which follows on the heels of the 19 100-point scores he gave to the 2009 Bordeaux vintage. I haven’t kept track, but Mike Steinberger reports that Parker has given out at least 53 100-point scores since March.
Apparently, grade inflation has migrated from Harvard to Napa.
Many commentators cried foul, arguing that this grade inflation made wine scores meaningless. Parker took to his blog to defend his journal’s scoring, claiming that wines are simply getting better and the high scores reflect this general improvement.
Mike Steinberger had the most thoughtful critique.
But I think one reason Parker and Galloni have encountered so much skepticism is that if you accept the 100-point scale as a quasi-objective means of assessing wines—and it seems to me that if you buy into the 100-point thing, you are necessarily accepting the idea that it is a quasi-objective standard—then the sheer number of wines clustered at the top of the scale simply isn’t credible.
But I’m not sure I follow Steinberger’s reasoning.
If Parker is right that winemaking has improved since the great but rare Bordeaux vintages of the past, such as the 1947 Cheval Blanc or the 1961 Latour, then clustering at the top of the rankings is precisely what we should expect. Furthermore, Parker’s claim is plausible. Improvements in winemaking technology, advances in the science of winemaking, and the increasingly competitive nature of global wine markets have all contributed to a glut of good wines. There is no reason to think this doesn’t affect the very top of the scale.
Nevertheless, Steinberger thinks the scores are excessive:
But even if the overall quality of wines is better, it doesn’t follow that so many wines should be receiving eye-popping scores. If the competition is much tougher now than it was 10 years ago, it shouldn’t be easier to get 96 or 97 points; it should be harder…. Forgive the tautology, but if the bar has been raised, you need to raise the bar. You can do that one of two ways: by lengthening the scale—making the highest score, say, 110 points rather than 100—or by tightening the standards within the 100-point framework to reflect the fact that the quality is so vastly improved. If you don’t do either of those things, you end up in a situation like the one that Parker and Galloni are now confronting—with your reviews being greeted mainly with cynicism and derision.
But why would such “grading on a curve” be more objective? If wine quality really has improved, objectivity would demand that improvement be reflected in the number of highly rated wines, if by “objectivity” we mean something like “tracking the truth”.
It seems to me that what Steinberger and other critics are worried about is not objectivity or the erosion of standards. Rather, they are worried about the loss of the aura of scarcity surrounding high-scoring wines. When perfection becomes the norm, it is no longer valuable as a measure of greatness or prestige. Apparently wine still requires an aristocracy.
Steinberger proposes to correct this by lengthening the upper range of possible points or tightening standards within the 100-point limit. But neither solution will enhance the credibility of the system.
If the 100-point system has a flaw, it is that point gradations assume more precision than the practice of wine tasting allows. Is there an explainable, projectable difference between a 99-point and a 100-point wine that can be extended over time, so that every time the 100-point wine is tasted it is one point better than the 99? I doubt it. Yet both of Steinberger’s solutions presuppose an ability to make even more fine-grained judgments about wine quality, because either the scorer has more points for which criteria must be found, or she must find reason to exclude wines that are otherwise worthy in order to tighten standards. But it is precisely the absence of criteria on which to base scores that is the problem; it can’t be solved by demanding more criteria.
Wine scores have a particular but limited meaning. They measure an expert’s level of enthusiasm for a wine at a particular time and place. This is useful information for a wine buyer to have. But the scores do not precisely track fine gradations of wine quality.
This problem with precision is especially acute with the 100-point wines under discussion. Wines that impressive are not going to have obvious flaws (at least when their future performance is taken into account) and will not fail according to the standard criteria used to judge wine. Unlike 90-point wines, 100-point wines will have plenty of balance, structure, power, intensity, elegance, finesse, body, complexity, etc. Thus, judgments at this level must search for criteria without a fixed standard to rely on and with nothing to which a numerical value can be attached.
An analogy with art criticism might be helpful. Contrary to popular opinion, it is possible to grade art. Art professors do it routinely. This is because much student art will satisfy or fail to satisfy standard criteria used to evaluate art and numerical values can be assigned to those criteria. Such grading is not “perfectly objective” whatever that would mean, but the grades nevertheless have meaning if the grading standards are clear.
But it does not follow from this that we can usefully assign numerical grades to the work of Monet, Rembrandt, or Picasso. At this level the ordinary criteria we use to assign a grade to works of art don’t apply. The works of the masters are not lacking in those dimensions that afflict student art. Instead, what we get is a unique vision, something original and incomparable that cannot be captured by a set of standards or criteria. The assignment of a numerical grade would be pointless.
Similarly, for wines that fall into the 95+ range, I doubt there is much point in assigning a numerical value. If they lack originality or consummate expressiveness, they don’t belong there. If they have it, they cannot be readily compared.
If inferior wines are gaining admittance to the pantheon of legends, we have reason to be skeptical of the 100-point system. But if these wines are indeed worthy when compared to the best from the past, the lack of a scoring system that makes precise qualitative distinctions is not a worry. It is what we should expect from an expanded pantheon of legends. Grade inflation is only grade inflation if the higher grades are undeserved.
This, of course, makes the 100-point system useless for comparing the best wines, which will make neither Parker nor the investors who place great stock in his scoring system happy.
And if great wines are less rare than they were in the past, isn’t that a cause for celebration? Or is this democracy still pining for aristocracy?