
Tuesday, September 2, 2014

Which is Better - Young Wine or Aged?

From my brother David's Glug website, where you should all be looking for wine value.

Which is Better - Young Wine or Aged? 
Wednesday, 27th August, 2014  - David Farmer 
When you watch wine judges at work you will note time and time again that younger, fresher wines receive higher scores than older wines. It is also usual in vertical tastings of famous wines to observe younger wines receiving scores as high as or higher than the venerable, classical vintages.
In long vertical tastings covering many decades, the famous classical vintages with high scores will have wines with mid-level scores on either side, yet as the wines get younger the scores slowly rise, with a pronounced uplift for the best of the recent vintages.
Why is this? I think there are two possibilities. First, younger wines are agreeable, as the fresh aromas and vibrant, lively, sappy, primary tastes are very appealing. What a judge sees in the glass on that day is what they must judge and score. They may well mark down a wine that has a long future because it lacks current appeal or charm, even though it may have the structure to evolve. It is worth noting here that many judges believe a wine which will age well will also be appealing when youthful, and there is much truth in this view.
The second reason is that young wines have similar colours, aromas and pleasant fruity tastes, and it is not so easy to pick out those that are distinctively better, and harder still to predict which wines have a long future. Experts may well tell me I am wrong, but I have been studying the results of judging for many decades.
Which wines age well, and move to a level of interest beyond the delights of the primary flavours, takes two to four years to be revealed. Naturally there is more certainty with wines from notable regions which have been well studied and whose wines have a long provenance. I refer here of course to wines like the classified growths of Bordeaux.
In general I think pretty well all whites are best drunk young, while better reds can improve - change is a better expression - but be cautious about the long-term potential.
Recently I reviewed a Winestate tasting of Shiraz here and a table of the results shows in a striking manner the influence of youth in scoring.

Circled are the top scores for the cheaper Shiraz brackets which illustrate how young wines are frequently favoured by wine judges.
I have circled the following points.
1) The top score, averaging 3.87 stars in wines selling for $10-$15, was shiraz and shiraz blends from 2012, with 12 wines tasted and 12 wines rated;
2) The top score, averaging 4 stars in wines selling for $15-$20, was shiraz from 2012, with 12 wines tasted and 12 wines rated;
3) The top score, averaging 4.08 stars in wines selling for $20-$25, was shiraz from 2012, with 10 wines tasted and 6 wines rated.
Please note the lower scores for each price division are usually for older vintages.
This tasting well illustrates the points made in this shopping guide.

Monday, July 28, 2014

Judging the methods of a wine magazine's wine judging

Reprinted from the Glug website - the recommended place to buy wine on the web.
Monday, 21st July, 2014  - David Farmer 
In my note Applying financial analyst John Moody's rating system to wine show judging I discussed how the judging or grading of wine is fraught with difficulties. I then pondered if a better system was possible and found encouragement in the methods of John Moody in his rating of financial paper.
To restate the problem in a different way, consider two teams tasting the same set of wines in different rooms at the same time. What we know is that they will come up with different results. The results will diverge further if the wine classes do not have a narrow focus and, for example, include wines from all countries, many varieties and many vintages.
In that Moody article I mentioned that knowing such things as the region, the provenance, the lineage or length of history of the wine, its ability to age and the maker would give an extra dimension to the tasting, and that such factors need exploring.
Great wines have taken their position in the wine hierarchy after countless assessments by the market place over many years and such considerations cannot be dismissed.
So let us return to the Winestate tasting, the 'World's Greatest Syrah and Shiraz Tasting', of 582 wines.
I have been reading Winestate for decades and know its style pretty well. I confess, though, that despite the process of judging and scoring wines being clearly set out, I have always thought something was missing, no doubt because of my sceptical view of most things.
Winestate say: "...the wines are not known to judges. The three judges taste the wines blind and assign a score without reference to each other. Only then do they compare scores, and if there is dissension they re-taste the wines and come to an agreement. Scores are compiled using the 20 point international system... These final 'medals' are then converted into a star rating system... A gold medal means 5 stars, silver is 4 stars and bronze is three stars."
The star rating also includes 3.5 stars and 4.5 stars, which in my view means it is a ten point scoring method. It is worth remembering that Winestate is a buying guide, and to my knowledge they do not review wines with scores lower than three stars, presumably because these are not worth drinking.
It worries me that judges use the 20 point system to make assessments and this is converted backwards into the published five star system. Doesn't it make more sense to judge from the start using stars, or to publish the score out of 20?
It is also stated: "Wine judging is an inexact art, not a science - even at the highest levels of proficiency. Accordingly, Winestate uses the star rating system which reflects a range, rather than a specific point score. Point systems indicate a level of accuracy that simply does not exist."
Then, to clarify a star versus a point score, Winestate produce a table which shows, for example, that a 4 star wine equals 17-17.9/20, which becomes 94-95/100.
Here is the Winestate comparison table.
This triggers another worry. Scaling up a 20 point system to 100 points means 4 stars, equalling 17-17.9, should become the range 85-90. As I understand it, the Winestate comparison table illustrates how the stars compare to magazines which use the 100 point method. Thus four stars undergo a correction factor and become 94-95.
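As a quick check of that arithmetic, here is a minimal Python sketch using only the figures quoted in this article; the rest of the Winestate table is not reproduced here, so any other rows would be assumptions:

```python
# A minimal sketch of the arithmetic above, using only the figures
# quoted in this article: 4 stars = 17-17.9/20, published as 94-95/100.
def linear_100(score_20):
    """Naive conversion: treat 100 points as five times the 20-point score."""
    return score_20 * 5

# A straight rescaling of the 4-star band gives roughly 85-90...
print(linear_100(17.0), linear_100(17.9))   # 85.0 89.5

# ...yet the comparison table publishes 94-95 for the same band, so the
# conversion is not a plain rescaling but a correction toward the
# compressed top end that 100-point magazines actually use.
```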
Further clues about the judging process were set out in the Winestate editorial by Peter Simic.
"Of course the problem we face at Winestate, and the reason why our scores tend to be lower, is that we use the International 20 points judging system with three judges tasting each wine blind. Then we use the Winestate \'majority rules\' system where rather than averaging scores, two judges have to recommend the wine and the closest two scores go through. So, for example, if two judges give a wine 15/20 and the third gives it 18 points it is out because two judges have said it is out, rather than averaging up."...
I see no logic in the 'majority rules' method, which does at least confirm my suspicion that the results, instead of being fearless, are massaged. Why not average the three tasters, and where is the case that 'majority rules' leads to more accurate judging? It strikes me as the opposite: if the judges are equal, how can one judge be dismissed even if that score is at large variance with the others? [Note: similar pressure or correction is used in show judging as well.]
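To make the contrast concrete, here is a minimal Python sketch, my own reconstruction from the editorial's description rather than Winestate's actual procedure, of averaging versus 'majority rules':

```python
# A minimal sketch (my reconstruction from the editorial, not
# Winestate's actual code) contrasting plain averaging with the
# 'majority rules' method, where the two closest of the three blind
# scores go through and the outlier is discarded.
from itertools import combinations

def average_score(scores):
    """Plain average of the three judges' 20-point scores."""
    return sum(scores) / len(scores)

def majority_rules(scores):
    """Average only the two closest scores, dropping the outlier."""
    closest_pair = min(combinations(scores, 2),
                       key=lambda pair: abs(pair[0] - pair[1]))
    return sum(closest_pair) / 2

# The editorial's example: two judges give 15/20, the third gives 18.
scores = [15, 15, 18]
print(average_score(scores))   # 16.0 - the high score lifts the wine
print(majority_rules(scores))  # 15.0 - the dissenting 18 is discarded
```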
All this confirms is that stars and numbers are a useful buying guide, but customers must remain wary. I also have the view that for the five star rating to offer 'a range, rather than a specific point score', the divisions of 3.5 and 4.5 stars should be excluded, as the divisions between 15.5 and 20 are not stepped evenly when they are re-calibrated to stars.
The editorial also says:
..."At the higher priced level it is like 'taking the Rolls out on a dirt track for a spin', ...without giving extra points for the provenance of history and reputation. But of course we cannot make allowances for some iconic brands when all our wines are judged blind."
I have already offered caution that not taking into account the pedigree of the best wines can lead to very odd results and I often remark that at the top level it is not the wine being judged so much as the judges themselves.
So a total of 17 different judges worked their way through 582 wines over 6 days. The large number of judges also worries me and adds another random factor to the final result. For example, did each panel taste a random selection across all price points, or did they know the price category they were judging? We are not told.
My difficulty over decades of reading Winestate is a simple one and revolves around the wine price. Surely a four star wine at a low price cannot be better than a three star wine selling for, say, ten times the price, yet we are told this is not so.
So what do we find? A few bargains were unearthed at the low price end with 4.5 stars going to Brookland Valley, Taylors Promised Land and Red Knot McLaren Vale while 4 stars were given to Shot in the Dark, Johnny Q, Wolf Blass Red Label and a few others.
Going back to the Winestate table I read 4.5 stars equates to 18-18.4 and 96-97 on the 100 point scale. These are indeed bargains, but what room is left for the far more expensive great wines of this challenge?
So I turn to the eight wines priced over $200 to find two with 5 stars, one with 4 stars and five with 3 stars. The 3 star bracket caught some beauties, including Guigal (France), Penfolds Grange 2008 and Hill of Grace 2008.
So as a consumer I am expected to believe that Grange, which the Wine Advocate for example gave 100/100, is a lesser wine than a humble Wolf Blass Red Label.
As Winestate has said all wines are judged equally, I graphed the results.
Much of the towering edifice of the wine business is built on the basic idea that expensive wines will taste better. Thus the graph should rise as the wines increase in price.
Thankfully I am saved from having to offer a detailed view of the results as my experience allows me to take the easy path by saying I do not believe the results of this tasting.
I turned back to John Moody to see what he might have advised. Moody charged the investors in financial paper for his company's opinions; advice worth paying for if it helps in avoiding losses. Built into the grading is another useful feature, as it can assist in deciding your appetite for risk. In 1970 Moody's joined similar firms in also charging the issuers of financial paper for their helpful ratings.
To walk both sides of the street is very hard, and I am ever mindful that Moody's, with a bunch of others, had some role to play in the recent GFC debacle. Collecting money from both the issuer and the investor, it seems, can lead to the corruption of ratings.
Thus I will be very careful about how I use Moody in thinking about wine tasting.
I have the increasing suspicion that some large international tastings are beginning to walk both sides of the street. Tastings have evolved from a service to wine makers, to being offered for a small charge to assist consumers, to the current vision of a global event that can be a useful money maker, with fees gathered to enter while charging for the use of the results.
Winestate of course has always been scrupulously fair with their assessments but whether the results of this Shiraz tasting are helpful is what this article is about.
I recall that the Winestate results came out about the time Wine Australia ran its Savour Australia 2013 programme, with guests arriving from overseas to listen to experts explaining why they should buy Australian wine. I wonder if any visitors stopped to think why two of our most famous wines, Penfolds Grange and Hill of Grace, had put up such a miserable showing in the country's premier wine magazine.
I have wondered for a long time about the point of large omnibus wine tastings; those saying 'we assess all wines equally and masked' seem to offer no better approach than that of the marketing genius Robert Parker and others, who knew it made sense at the top end to know what they were tasting.
As wine shows move from agricultural shows, which helped instruct amateur makers in how to avoid faults, to becoming part of marketing, they must constantly evolve. I have also learnt that having forthright opinions on wines can leave you badly exposed, and that to stare down the marketplace is very risky.
Keeping things in proportion is often hard to do and I do this by reminding myself that wine is just a drink.

Sunday, May 11, 2014

The classic tale of the wine judging nonsense

An excellent retelling of the exposure of the nonsense that is wine judging appeared recently in the South China Morning Post (reprinted from the London Daily Telegraph). It tells the story of American statistician turned wine maker Robert Hodgson, who analysed how widely so-called wine judging experts differed about the quality of wines.
An example:
"I looked at a set of data that showed the scores for wines that were entered into as many as 13 different competitions," he says. "I tracked the scores from one competition to another. There were, like, 4,000 wines that I looked at. Of all the ones that got a gold medal, virtually all got a 'no award' some place else. It turns out that the probability of getting a gold medal matches almost exactly what you'd expect from a completely random process."
Other gems recalled in this piece, which is well worth reading in full:
One French academic, Frédéric Brochet, decanted the same ordinary bordeaux into a bottle with a budget label and one with that of a grand cru. When the connoisseurs tasted the "grand cru" they rhapsodised about its excellence while decrying the "table" version as "flat". In the US, psychologists at the University of California, Davis, dyed a dry white various shades of red and lied about what it was. Their experts described the sweetness of the drink according to whether they believed they were tasting rosé, sherry, bordeaux or burgundy. A similar but no less sobering test was carried out in 2001 by Brochet at the University of Bordeaux, in France. His 54 experts didn't spot that the red wine they were drinking was a white dyed with food colouring.