Wine Scores and Ratings – A Look Inside Wine Competitions
Wine consumers seeking valid assessments of exalted wines often look for high wine scores. But all that tells us is that a solitary wine critic lifted a glass of high-caliber Pinot Noir and moments later proclaimed it to be worth 97 points.
How did that reviewer get to that conclusion? What were the parameters that led to 97 and not 98?
Surely there must be a better way to ascertain the quality of a wine, one with a built-in guard against a preference for one thing (high alcohol, lots of oak) or a distaste for another trait (high acidity).
What Do Wine Ratings Mean?
In fact, the solo critic’s score often is presented in a vacuum. What we’re never told is:
- Was the critic influenced by the wine’s brand pedigree, varietal, prestigious appellation, or high price?
- Did the critic have a nasty cold, allergies, a bad night’s sleep, or an aching back?
- Was the critic a defendant in a lawsuit, taking pain medications, or recovering from a broken toe?
- Did the critic factor into the score any possible regional elements of the wine, or simply award a score based solely on a hedonistic reaction?
How a wine score was arrived at is one key to explaining how valid it is, but almost never are we told anything about the process. And there are other ways to explain the caliber of a wine, such as the results of wine competitions, especially those that are properly run (details below) and have skilled judges.
The results of professional wine competitions benefit consumers who seek advice from experts in their quest to get great wines or good values. Properly strategized competitions also benefit wineries that want to know whether their output meets current standards of quality vis-à-vis other wines in the same category.
Wholesalers also benefit since they can determine whether they have priced their latest offerings appropriately. And others in the industry appreciate competitions because they may not trust number-driven single reviewers whose tasting strategies are completely unknown.
Use of the term “properly strategized competitions” above means that the organizers understand the pluses and minuses of the format(s) they have chosen and use checks and balances to deal with them.
For example, it may sound like a good idea to have a set of 10 judges on one panel evaluating wines of a unified category (say 140 2018 Cabernets), using a wine grading system, adding up the scores and dividing by 10. In this way the organizers of the event can come up with a rating scale for the highest-scoring wines.
But there are numerous drawbacks to such ideas. One is logistics: 10 great judges are hard to find, and a single 10-judge panel evaluating 100 wines a day would need 15 days to get through a 1,500-wine competition.
Moreover, panels of 10 tend to create bell-shaped curves in which the average score is barely above a bronze medal. Averages usually lead to a great number of average wines all lumped into a hard-to-decipher grouping.
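The washout effect of averaging can be illustrated with a toy calculation. The scores below are hypothetical numbers of my own on a 100-point scale, not from any real competition:

```python
import statistics

# Hypothetical case: one judge scores a wine 97 (gold territory),
# while the other nine hover in the mid-80s (bronze territory on
# many scales). Averaging all 10 buries the gold-medal opinion.
scores = [97, 86, 87, 85, 86, 88, 86, 87, 85, 86]

average = statistics.mean(scores)
print(average)  # 87.3: the 97 barely moves the needle
```

One strong dissenting palate shifts a 10-judge average by only about a point, which is why large averaged panels tend to pile wines into an undifferentiated middle.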
I have coordinated wine competitions since 1982 and can say that 10 judges on a panel is awful and that the best panel size is four. You might think this could lead to problems when votes among judges are tied, two votes for one medal and two for another, such as two votes for gold and two for silver.
However, panels with an odd number of judges (3, 5, 7, etc.) simply let the majority rule, and “majority rules” often leads to mediocre results, especially if one or more of the judges is inexperienced at grading wines.
The benefit of panels with an even number of judges is that a tie forces the panel members to discuss the wines and reach an agreement. Professional judges work to compromise, and ties can almost always be resolved, sometimes by horse-trading.
Odd-numbered panels that rely on majority rules can often lead to terrible results, such as a flawed wine getting a medal.
Three-person panels pose a uniquely tricky situation when two members have valid reasons to like a wine and the third uses his or her power to deny the wine a medal of any color, for whatever reason (typically the egotistical urge to simply wield power).
The best three-person panels I have ever witnessed and participated in were at the 11 Australian wine shows I have been asked to judge.
Australian vs. US Wine Competitions
Australia runs the finest wine competitions in the world, by far. One reason is that the judges on its three-person panels all are experts. Each one respects the spirit of the event in which the judges respect the votes of the others. Ego rarely enters the picture.
Moreover, the present method of three members per panel in the United States is far too simplistic for a major, world-class competition. It assumes that two mediocre palates, or people who aren’t skilled in recognizing anomalous styles, are “better” than the third judge who may have a valid argument, even if it isn’t always mainstream.
I have faced many situations where I voted a wine a gold medal, and two others voted no award, and the wine in question wasn’t even given consideration for a bronze medal. It’s as if a valid argument for a gold medal is ignored by a “majority rules” kind of mentality.
This ignores the possibility that a slightly aberrant style of wine is distinctive enough to warrant a medal, which would reward a winemaker’s courage and adventurousness.
The critique I often get from those who prefer the “majority rules” system (such as three- or five-person panels) is, “What do you do with ties?”
Tie votes on four-person panels (such as two silver votes and two bronzes) often yield excellent results because the four-person panel is required to initiate a discussion that can eventually lead to a consensus. And the two-silver, two-bronze vote should, in most cases, compromise to a silver, since all four panel members voted for the wine.
The only scenario in which a bronze is the better result is where a silver voter sees a valid argument against the wine, or where a bronze voter realizes that his or her vote was weak and should be dropped.
A split vote of two golds and two bronzes should not be an automatic silver medal. One side or the other usually tries to make a compelling case. Discussion is key; single reviewers have no devil’s advocate.
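The four-judge tie logic described above can be sketched in code. The medal ordering and the rule that an adjacent two-two split compromises upward are my reading of the text, not an official rulebook, and real panels settle everything by discussion rather than formula:

```python
# Hypothetical sketch of four-judge tie resolution.
MEDALS = ["no award", "bronze", "silver", "gold"]

def resolve(votes):
    """Suggest an outcome for four votes; wide splits go to discussion."""
    assert len(votes) == 4
    ranks = sorted(MEDALS.index(v) for v in votes)
    if ranks[0] == ranks[3]:
        return MEDALS[ranks[0]]          # unanimous
    low, high = ranks[1], ranks[2]
    if ranks.count(low) == 2 and ranks.count(high) == 2 and high - low == 1:
        # Adjacent two-two split (e.g. silver/bronze): all four voted
        # for a medal, so compromise to the higher one.
        return MEDALS[high]
    return "discuss"                     # e.g. gold/bronze splits

print(resolve(["silver", "silver", "bronze", "bronze"]))  # silver
print(resolve(["gold", "gold", "bronze", "bronze"]))      # discuss
```

The point of the sketch is only that an even panel has no mechanical escape hatch: anything short of an adjacent split lands in “discuss,” which is exactly where the author wants it.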
It is also crucial to assess the number of wines each panel judges. Assume 40 Chardonnays must be judged. Is this too much of a burden for one panel to judge? Usually not. How about 55? Perhaps that’s too many. But 100 probably is too many. One goal of the competition should be to have each category judged entirely by one panel, with that panel advised that they should look for an appropriate percentage of golds.
In major multi-region competitions, the average percentage of gold medals is between 7% and 10%. With 40 Chardonnays, there should be at least 3 or 4 gold medals. It is feasible to have six or more golds if the wines display various styles. This admonition before the judging starts helps the judges to understand that awarding of only 1 or 2 gold medals to a class of 40 is probably a poor result.
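As a quick sanity check of that arithmetic (the 7%-10% band comes from the text above; rounding to the nearest whole medal is my assumption):

```python
# Expected gold-medal count for a class, given a 7%-10% target band.
def expected_golds(class_size, low=0.07, high=0.10):
    return round(class_size * low), round(class_size * high)

print(expected_golds(40))   # (3, 4): a 40-wine class should see 3-4 golds
print(expected_golds(140))  # (10, 14)
```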
When assessing the results of any wine competition, always ask who the judges were and how many judges there were per panel.
Aspects of Wine Awards to Consider
- Does the wine awards competition use mainly professionals? I know the criticisms of winemakers, that they can be too technically demanding, but I believe such arguments are false. Most winemakers love wine, have a conscience, and can be relied on not to award medals to flawed wines. And most are also eager to give a gold medal where warranted.
- Does the judging use panels with odd numbers of judges and rely on a majority-rules system?
- Does the event maintain a consistent wine grading system, or does it mix evaluation systems? Using one system for one group of wines (medals) and another for a second group (points) can produce bizarre results.
- Are judges asked to evaluate wines by price? Such events lead to questionable results. For example, how are the categories of low-priced, moderate-priced, and high-priced wines determined? Is a $35 Chardonnay high-priced or moderate? How does the competition deal with wines that are never discounted vs. wines that are always discounted?
- If one category is judged by price, all categories should be judged by price.
- Does the event historically award a lot of bronze medals? Bronzes should be discouraged, probably by limiting them to a percentage of each group judged. A bronze is rarely a winery’s finest moment.
- Are judges asked to judge huge classes? It’s daunting to ask a panel to evaluate 150 Syrahs.
- Are sparkling wines separated into distinct groupings, such as Blanc de Noirs, French-American hybrids, native American varieties, dessert styles, etc.?
- Are fruit wines, flavored wines, and the like candidates to win sweepstakes awards, which go to the best wine in the competition? Formulated fruit and flavored wines are not the reason wine competitions exist.
- Are judges regularly asked to judge more than 150 wines a day? About 100 wines per day is plenty for most judges, who should also get time for ample breaks.
- Are judges asked to judge only one type of wine, such as only reds or only whites? The best panels get a variety of wines – a mix of reds, whites, and sweets.
- Are there palate cleansers? Cleansers need not be fancy, but they should include filtered or bottled water as well as some form of protein and fat (such as cheese, olives, bread, and crackers).
- What sort of glassware is used?
- Is lighting good? Fluorescent lights usually are terrible because they remove color from wine and make reds look older than they are.
- Do wines that get votes of S+, S+, S+, S+ end up automatically getting silver medals? Such wines are gold medal candidates! In fact, any wine that gets two gold medal votes is a gold medal candidate.
Other Wine Competition Factors
I believe no judge should ever justify a gold medal vote by saying, “I love it,” or “I could sell the hell out of this.” Judges should be prepared to justify their wine scores by answering the question, “Why, specifically, is this a gold medal?”
Wines that are considered to be odd, weird, or strange may wind up with no award. If one panel member likes such a wine, a senior judge should determine if there’s a possibility that it displayed a regional characteristic or a blending decision that made the wine the way it was.
I also believe that inflexible judges must be told the facts of life: this is not life and death, and wines should not be held to some mythical nirvana-esque standard. Excellent wines with great characteristics that fit the variety, the vintage, and the region are gold-medal candidates.
I also dislike it when a judge asks, “Are medals given for how the wines are now or how they’ll be in years?” Such folks should not be wine judges! Great wine is balanced, which means it can be consumed now and for some additional time.
I detest the situation where a judge hears two votes such as S+ and G and then votes “no medal” without any justifiable reason. All good judges should respect other judges’ opinions until a discussion can take place.
If, for example, a wine has no noticeable flaws, such a vote is usually mere petulance.
I’m also a foe of judges who bad-mouth light-colored red wines that have no other drawbacks. Consumers don’t drink color.
I’m also wary of judges who rarely vote gold medals for any wine. In most cases, such judges cast about 75% of their votes for bronze medals. Such judges place themselves above the wines. Hubris has no place in wine scores and competitions.