Monday, 21st July, 2014
In my note 'Applying financial analyst John Moody's rating system to wine show judging' I discussed how the judging or grading of wine is fraught with difficulties. I then pondered whether a better system was possible, and found encouragement in the methods of John Moody in his rating of financial paper.
To restate the problem in a different way, consider two teams tasting the same set of wines in different rooms at the same time. What we know is that they will come up with different results. The results will diverge further if the wine classes do not have a narrow focus; if, for example, they include wines from all countries, many varieties and many vintages.
In that Moody article I mentioned that knowing such things as the region, the provenance, the lineage or length of history of the wine, its ability to age, and the maker would give an extra dimension to the tasting, and that such factors need exploring.
Great wines have taken their position in the wine hierarchy after countless assessments by the marketplace over many years, and such considerations cannot be dismissed.
So let us return to the Winestate tasting: the 'World's Greatest Syrah and Shiraz Tasting' of 582 wines.
I have been reading Winestate for decades and know its style pretty well. I confess, though, that despite the process of judging and scoring wines being clearly set out, I have always thought something was missing, no doubt because of my sceptical view of most things.
Winestate say: "...the wines are not known to judges. The three judges taste the wines blind and assign a score without reference to each other. Only then do they compare scores, and if there is dissension they re-taste the wines and come to an agreement. Scores are compiled using the 20 point international system... These final 'medals' are then converted into a star rating system... A gold medal means 5 stars, silver is 4 stars and bronze is three stars."
The star rating also includes 3.5 stars and 4.5 stars, which in my view means it is a ten point scoring method. It is worth remembering that Winestate is a buying guide, and to my knowledge they do not review wines with scores lower than three stars, presumably because these are not worth drinking.
It worries me that judges use the 20 point system to make assessments, which is then converted backwards into the published five star system. Wouldn't it make more sense to judge from the start using stars, or else to publish the score out of 20?
It is also stated: "Wine judging is an inexact art, not a science, even at the highest levels of proficiency. Accordingly, Winestate uses the star rating system which reflects a range, rather than a specific point score. Point systems indicate a level of accuracy that simply does not exist."
Then, to clarify a star versus a point score, Winestate produce a table which shows, for example, that a 4 star wine equals 17-17.9/20, which becomes 94-95/100.
Here is the Winestate comparison table.
This triggers another worry. Scaling up a 20 point system to 100 points linearly means 4 stars, equalling 17-17.9, should become the range 85-89.5, roughly 85-90. As I understand it, the Winestate comparison table illustrates how the stars compare to magazines which use the 100 point method. Thus four stars undergo a correction factor and become 94-95.
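To make the arithmetic explicit, a straight linear conversion simply multiplies a 20 point score by five, so 17 becomes 85 and 17.9 becomes 89.5. Here is a minimal sketch of the comparison, using only the two bands quoted in this article (the rest of the table is not reproduced, and the variable names are mine):

```python
# Sketch comparing a straight linear rescale of a 20-point score
# with the published Winestate bands quoted in this article.
# Only the 4 and 4.5 star rows appear in the text, so only those
# two are included here.

def linear_100(score_20):
    """Naive linear conversion: a 20-point score times five."""
    return score_20 * 5

# (stars, band low, band high, published 100-point range)
winestate_bands = [
    (4.0, 17.0, 17.9, "94-95"),
    (4.5, 18.0, 18.4, "96-97"),
]

for stars, lo, hi, published in winestate_bands:
    print(f"{stars} stars: linear {linear_100(lo)}-{linear_100(hi)}, "
          f"published {published}")

# 4.0 stars: linear 85.0-89.5, published 94-95
# 4.5 stars: linear 90.0-92.0, published 96-97
```

The gap between the linear conversion and the published range is the 'correction factor' I am worried about.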
Further clues about the judging process were set out in the Winestate editorial by Peter Simic.
"Of course the problem we face at Winestate, and the reason why our scores tend to be lower, is that we use the International 20 points judging system with three judges tasting each wine blind. Then we use the Winestate \'majority rules\' system where rather than averaging scores, two judges have to recommend the wine and the closest two scores go through. So, for example, if two judges give a wine 15/20 and the third gives it 18 points it is out because two judges have said it is out, rather than averaging up."...
I see no logic in the 'majority rules' method, which does at least confirm my suspicion that the results, instead of being fearless, are massaged. Why not average the three tasters, and where is the evidence that 'majority rules' leads to more accurate judging? It strikes me as the opposite: if the judges are equal, how can one judge be dismissed, even when that score varies greatly from the others? [Note: similar pressure or correction is used in show judging as well.]
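As best I can reconstruct it from the editorial, the two rules work as sketched below. The function names are mine, and I have assumed that the 'closest two scores' that go through are then averaged, which the editorial does not actually spell out:

```python
from statistics import mean

def average_rule(scores):
    """Simple mean of all three judges' scores."""
    return mean(scores)

def majority_rule(scores):
    """Keep the two closest scores, discard the outlier,
    then combine the survivors (assumed here: by averaging)."""
    s = sorted(scores)
    # whichever adjacent pair is closer together survives
    pair = s[:2] if (s[1] - s[0]) <= (s[2] - s[1]) else s[1:]
    return mean(pair)

scores = [15, 15, 18]         # the editorial's own example
print(average_rule(scores))   # 16.0 -> above the 15.5 bronze line
print(majority_rule(scores))  # 15.0 -> out, as the editorial says
```

On the editorial's own example the two rules land on opposite sides of the bronze cut-off implied by the star table, which is precisely my complaint: a single outvoted judge can sink a wine that averaging would have let through.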
All this confirms that stars and numbers are a useful buying guide, but customers must remain wary. I also hold the view that for the five star rating to offer 'a range, rather than a specific point score', the divisions of 3.5 and 4.5 stars should be excluded, as the divisions between 15.5 and 20.0 are not stepped evenly when re-calibrated to stars.
The editorial also says:
..."At the higher priced level it is like 'taking the Rolls out on a dirt track for a spin', ...without giving extra points for the provenance of history and reputation. But of course we cannot make allowances for some iconic brands when all our wines are judged blind."
I have already offered caution that not taking into account the pedigree of the best wines can lead to very odd results and I often remark that at the top level it is not the wine being judged so much as the judges themselves.
So a total of 17 different judges worked their way through 582 wines over 6 days. The large number of judges also worries me and adds another random factor to the final result. For example, did each panel taste a random selection across all price points, or did they know the price category they were judging? We are not told.
My difficulty over decades of reading Winestate is a simple one and revolves around the wine price. Surely a four star wine at a low price cannot be better than a three star wine selling for, say, ten times the price; yet that is exactly what the ratings tell us.
So what do we find? A few bargains were unearthed at the low price end, with 4.5 stars going to Brookland Valley, Taylors Promised Land and Red Knot McLaren Vale, while 4 stars were given to Shot in the Dark, Johnny Q, Wolf Blass Red Label and a few others.
Going back to the Winestate table, I read that 4.5 stars equates to 18-18.4, and to 96-97 on the 100 point scale. These are indeed bargains, but what room is left for the far more expensive great wines of this challenge?
So I turn to the eight wines priced over $200 to find two with 5 stars, one with 4 stars and five with 3 stars. The 3 star bracket caught some beauties, including Guigal (France), Penfolds Grange 2008 and Hill of Grace 2008.
So as a consumer I am expected to believe that Grange, which the Wine Advocate, for example, gave 100/100, is a lesser wine than a humble Wolf Blass Red Label.
As Winestate has said all wines are judged equally, I graphed the results against price.
Much of the towering edifice of the wine business is built on the basic idea that expensive wines will taste better. Thus the graph should rise as the wines increase in price.
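For anyone who wants to repeat the exercise, here is a minimal sketch of such a graph. The prices and ratings below are placeholders only, since I am not reproducing the actual Winestate results:

```python
import matplotlib.pyplot as plt

# Placeholder data only: illustrative prices and star ratings,
# not the actual Winestate results.
prices = [15, 20, 35, 60, 120, 250, 500, 785]
stars = [4.0, 4.5, 3.5, 4.0, 3.0, 5.0, 3.0, 3.0]

plt.scatter(prices, stars)
plt.xscale("log")  # price points span two orders of magnitude
plt.xlabel("Price ($)")
plt.ylabel("Winestate star rating")
plt.title("Star rating versus price")
plt.show()
```

If the basic idea holds, the points should drift upwards to the right; in this tasting they did not.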
Thankfully I am saved from having to offer a detailed view of the results as my experience allows me to take the easy path by saying I do not believe the results of this tasting.
I turned back to John Moody to see what he might have advised. Moody charged the investors in financial paper for his company's opinions; advice worth paying for if it helps in avoiding losses. Built into the grading is another useful feature, as it can assist in deciding your appetite for risk. In 1970 Moody's joined similar firms in also charging the issuers of financial paper for their helpful ratings.
To walk both sides of the street is very hard, and I am ever mindful that Moody's, along with a number of others, had some role to play in the recent GFC debacle. Collecting money from both the issuer and the investor, it seems, can lead to the corruption of ratings.
Thus I will be very careful about how I use Moody in thinking about wine tasting.
I have the increasing suspicion that some large international tastings are beginning to walk both sides of the street. Tastings have evolved from a service to wine makers, to being offered for a small charge to assist consumers, to the current vision of a global event that can be a useful money maker, with fees gathered from entrants while the use of the results is also charged for.
Winestate of course has always been scrupulously fair with their assessments but whether the results of this Shiraz tasting are helpful is what this article is about.
I recall that the Winestate results came out about the time Wine Australia ran its Savour Australia 2013 programme, with guests arriving from overseas to listen to experts explaining why they should buy Australian wine. I wonder if any visitors stopped to think why two of our most famous wines, Penfolds Grange and Hill of Grace, had put up such a miserable showing in the country's premier wine magazine.
I have wondered for a long time about the point of large omnibus wine tastings, and the claim that 'we assess all wines equally and masked' seems to offer no better approach than that of the marketing genius Robert Parker and others, who knew it made sense at the top end to know what they were tasting.
Wine shows have moved from agricultural events, which helped instruct amateur makers in how to avoid faults, to being part of marketing, and so they must constantly evolve. I have also learnt that having forthright opinions on wines can leave you badly exposed, and that to stare down the marketplace is very risky.
Keeping things in proportion is often hard to do and I do this by reminding myself that wine is just a drink.