Wednesday 25 November 2009

Another Perspective on 2008 WADA Laboratory Statistics

When the Honourable John Fahey presided as Chair of the WADA Executive Committee in November 2008, his remarks included two important concepts.

First, as to the problem of full Signatory 'compliance', that WADA had “... certainly watered down the black and white approach that many believed should have been taken...”; and second, that the results achieved in the various Floyd Landis cases proved “... that the system was there and worked properly...”. Our Ww list of 'Landis Decisions' includes USADA I, USADA/WADA II, AFLD and the settled US Federal Court case, not to mention the 'Hacking' case still waltzing through the French court system (although Mr Fahey may not have been counting beyond the three US-based cases). The 'system' to which Mr Fahey refers is, of course, the entire WADA system. A question arises, however, as to the consistency of that system's 'efforts'.


The previous WADAwatch post highlighted legal anomalies in the basis of WADA's statistics reporting for the 2008 Lab Report. That Report included, as an explanation, Footnote 2 (FN2), which justified including, within the statistics on legal AAFs, the past numbers of “AF (Atypical Finding) results” from 2003 through 2007. WADA's legal basis for doing so (ostensibly to offer well-founded comparative results) remains unclear.


WADAwatch noted three major problems incorporated by this act of Reporting. First, by including 2008 AF findings, gleaned from its family of 'accredited laboratories', in a 2009 official Report, WADA is 'jumping the gun': AFs did not legally exist until the ISL, WADC and Technical Documents came into force in January 2009, so compiling AF statistics a year early makes no sense. Second, WADA evidently 'knew' (as a clear reading of FN2 reveals) that some of its previously proclaimed AAF Sample analyses (and, of course, Athletes) were not AAFs, yet they had been publicly disclosed as such for those first five years (2003 – 2007), while now apparently being only 'AFs': lower-threshold anomalies, 'non-positives', reports provoked by the 'atypicality of findings' of a Substance in whichever body fluid(s) were examined, and for which the Lab's Senior Management wishes to find more information or investigate further. Inclusion of the AF statistics thus calls into question how this WADA system appears to be working. The third problem is either moral or legal: if WADA is re-denominating previously reported AAFs as AFs, up to six years after those AAFs were announced, it may have a legal obligation, now at the end of 2009, to at least publish the numbers, or identify the individual cases, that were the basis for this abracadabra-esque statistical sleight-of-hand.


For as long as WADA has been operating under its Code, it has promoted the concept of 'laboratory standardization'. To assist in analysing (for our own understanding) the statistics provided by WADA, we developed our own Table of WADA lab statistics, which ranks the 34 WADA laboratories, from 1 to 34, by their percentage of achieved 'Findings'. We hesitate to rail against the fact that labs under a 'unified' system are offering percentages that range from nearly five percent down to barely one-half of one percent. Take a look at WADAwatch's table first...


(Save a copy! You have permission; it shows better enlarged.)


The underlying issue resulting from this disclosure of the WADA laboratories' annual reporting of 'Findings' remains pre-eminent: what statistic confirms that WADA has produced a 'standardized system'?


It comes as no surprise which laboratory takes First Place, the systemic Gold Star, by holding the highest percentage of AAF (plus AF) findings. The AFLD 'département des analyses', formerly the French LNDD, holds this 'chapeau'. Following closely are Madrid, Ghent and Prague; these four laboratories complete the group with 'Findings' results higher than four percent (4pc). Five other labs fill the ranks between three and four percent. Nine labs share eight places in the two-to-three-percent band: Lausanne and the newest ('Welcome!') WADA lab, New Delhi, show an identical 'Findings' result (2.46pc). Next, eleven labs float between the one and two percent levels; the lowest group, between 0.50 and one percent, includes five labs. See our Table's lower left corner for the 'Regional Subtotals'; the lower right shows subtotals by percentage band. The 'Top Four' labs represent eleven percent (11pc) of the family of labs (4 of 34), yet produced 707 AAFs, a number equal to 23pc of the total 2008 AAFs.
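As a rough check of that arithmetic, here is a small sketch using only the figures quoted above; the 'implied total' is an inference from the post's own numbers, not a figure drawn directly from the WADA Report.

    # Rough check of the 'Top Four' share, using only the figures quoted above.
    top_four_labs = 4
    total_labs = 34
    top_four_aafs = 707          # AAFs attributed to the four 'most positive' labs
    share_of_total_aafs = 0.23   # stated as 23pc of all 2008 AAFs

    lab_share = top_four_labs / total_labs
    implied_total_aafs = top_four_aafs / share_of_total_aafs

    print(f"Share of the lab family: {lab_share:.1%}")            # ~11.8%
    print(f"Implied total 2008 AAFs: {implied_total_aafs:,.0f}")  # ~3,074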


A more astonishing analysis comes, however, from examining the number of 'Samples' run within each 'Findings' band. The four 'most positive' labs ran just over nine percent of the Samples: 26,115 in total. The five 'least positive' labs ran 39.84pc of the 2008 Samples: 109,406. The labs reporting positives below two percent (2pc) but above one percent ran 74,723 Samples. Together, that equates to 184,129 Samples, barely over 67pc of the annual total, showing less than two percent positive 'Findings'.
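The same quoted figures allow the approximate annual Sample total to be reconstructed (again a sketch: the roughly 274,600 total below is derived from the 39.84pc figure, not taken directly from the Report).

    # Reconstruct the approximate 2008 Sample total from the quoted percentages.
    least_positive_samples = 109_406   # the five 'least positive' labs
    least_positive_share = 0.3984      # stated as 39.84pc of all 2008 Samples

    implied_total_samples = least_positive_samples / least_positive_share   # ~274,600

    most_positive_samples = 26_115     # the four 'most positive' labs
    one_to_two_pc_samples = 74_723     # labs reporting between 1pc and 2pc positives
    under_two_pc = least_positive_samples + one_to_two_pc_samples           # 184,129

    print(f"Implied annual total:     {implied_total_samples:,.0f}")
    print(f"'Most positive' share:    {most_positive_samples / implied_total_samples:.1%}")  # ~9.5%
    print(f"Under-2pc share of total: {under_two_pc / implied_total_samples:.1%}")           # ~67.0%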


All four of the 'most positive-findings' laboratories are in Europe; none of the 'least positive-findings' labs is in Europe: two are from the USA (L.A. and Salt Lake City), two from Asia (Tokyo and Beijing), and one from Ankara (Turkey: we define 'Europe' as Nicolas Sarkozy does). Our presumption is that WADA is currently questioning these disparate statistics at great length, internally and in correspondence with its Signatories, for the statistics call into question in what sense the WADA system is functioning under the title of 'Laboratory Standardization'. As to the 'champion of positives', operating under the guidance of the AFLD and Pierre Bordry, one might think WADA would add this 'achievement' by that Agency to its analysis of the various counter-charges levied against the AFLD in the UCI's response-report of late October.


The phrase 'the Beauty of Science' has a regular place in texts posted by WADAwatch. When the Beauty of Science is surrounded (suborned? submerged?) by national politics, international politics and the World of Sport, it becomes hard to judge whether the majority of labs, whose Sample analyses show 'low positives', are 'falsely reporting negatives' that should, de facto, be 'positives', or whether the opposite is true.


Standardization of laboratories does not mean, ipso facto, that all must have the same IRMS machine. It should mean that whichever IRMS machine a WADA lab uses, when given a control Sample in pristine condition, containing an identical concentration of, let's say, Testosterone metabolites such as 5-alpha diol and 5-beta diol, it ought to identify those metabolites, and their concentrations, at the anticipated level (within scientifically accepted, statistically acceptable norms: say 0.001 to 0.0001pc).
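To make the idea concrete, here is a minimal sketch of what such an acceptance check might look like; the metabolite names are borrowed from the paragraph above, but the reference concentrations, units and the five-percent tolerance are hypothetical illustrations, not actual WADA criteria.

    # Hypothetical sketch of a 'standardization' acceptance check.
    # Reference values and the tolerance are illustrative, not WADA criteria.
    REFERENCE_CONCENTRATIONS = {   # certified values for a pristine control Sample (ng/mL, hypothetical)
        "5-alpha diol": 40.0,
        "5-beta diol": 60.0,
    }
    RELATIVE_TOLERANCE = 0.05      # accept results within +/- 5pc of the certified value

    def within_standard(metabolite: str, measured: float) -> bool:
        """Return True if a lab's measured concentration sits inside the agreed band."""
        reference = REFERENCE_CONCENTRATIONS[metabolite]
        return abs(measured - reference) / reference <= RELATIVE_TOLERANCE

    # Two hypothetical labs measuring the same control Sample:
    print(within_standard("5-alpha diol", 41.2))   # True  -> within the band
    print(within_standard("5-beta diol", 52.0))    # False -> outside the band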


It does not mean that all labs should operate (allowing for variations of linguistic or legal necessity) with the identical Laboratory Chain of Custody (LCOC) form (wouldn't that be nice, though?); it does mean, however, that any legal evidence derived should be acceptably sufficient, satisfying WADA (or higher) standards for disciplinary hearings. These are necessities if the system is not to be perceived as a hodge-podge of 'don't touch my Science!' participants.


Is the Los Angeles laboratory, which ran over seven times as many Samples in 2008 as the AFLD lab (72,394 to 10,194) and produced 'AAF Findings' at a rate over seven times lower (0.64pc to 4.98pc), a much 'better lab'? Could it be 'full of deceit', 'aiding Athletes to cheat'? Not likely, in this author's opinion.
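Taking the quoted sample counts and rates at face value (a sketch; the absolute 'Findings' counts below are implied by those figures rather than drawn directly from the Report), the two labs in fact produce comparable absolute numbers of 'Findings':

    # Compare the Los Angeles and AFLD labs using the figures quoted above.
    la_samples, la_rate = 72_394, 0.0064       # L.A.: 72,394 Samples at 0.64pc
    afld_samples, afld_rate = 10_194, 0.0498   # AFLD: 10,194 Samples at 4.98pc

    print(f"Sample ratio (L.A. / AFLD): {la_samples / afld_samples:.1f}x")   # ~7.1x
    print(f"Rate ratio (AFLD / L.A.):   {afld_rate / la_rate:.1f}x")         # ~7.8x
    print(f"Implied L.A. 'Findings':    {la_samples * la_rate:.0f}")         # ~463
    print(f"Implied AFLD 'Findings':    {afld_samples * afld_rate:.0f}")     # ~508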


Does a perfect laboratory sit at some median level between these two 'extremes'? Are the labs 'Standardized'? WADAwatch certainly cannot answer the dilemma these questions provoke.


It can only pose the questions... and salute one true fact: doping Athletes (rather than cheating Athletes generally: cheaters still exist, whether through 'handballs' that secure a trip to the World Cup or the 'betting scandals' ricocheting across European football (soccer) leagues) are, evidently, barely one percent of the total, at 1.08pc (1.84pc if the 'premature' AF numbers are added in). That total, of course, covers Athletes elite enough to reside 'within the international anti-doping system', using Substances (or Methods) detectable under today's testing and analysis environment. It is a far cry from the 'mob mentality' which claims 'they're all doped!'. And it does ignore the 'migration' of doping Athletes towards medical substances that have yet to be prohibited.


It is in the best interests of Sport, as a joy, a business, a spectacle or a career, that this progress be acknowledged as fact.


WADA should not mask its successes under statistics that seem to imply promotion of the inverse.




To be continued... "And..... action!"

..........@......... WADAWATCH
one hundred percent pure

copyright 2009 Ww


1 comment:

velovortmax said...

Drew,

There were two major issues Landis argued before the CAS that pertain to your 5alphaPdiol reference. The LNDD IRMS did not meet the required accuracy of the published uncertainty, +/- 0.8 mil. In fact, the Floyd Landis findings had an error of over 20%. The CAS ignored this error, claiming that because the 5alphaPdiol result was -6.14 mil, the 20% error would not have caused the adverse analytical finding.

Worse than the uncertainty were these documented facts of the Floyd Landis case. The GC/C/IRMS used by LNDD was not designed for testosterone tests. The GC/C/IRMS columns for the "A" samples were non-polarized, while the "B" sample results were measured with polarized columns. The AAA claimed that both tests used the same GC/C/IRMS columns, although the lab document package indicates otherwise.

WADA accredited labs also have the problem of reproducing their findings. Testing the same sample at two different WADA labs would produce large statistical errors, rendering the results useless. Even the same blank sample used for calibration at two different WADA labs would result in calibration errors of the GC/C/IRMS.

WADA should conduct a study to examine these problems, to ascertain whether or not the WADA accredited laboratories are meeting the minimal requirement of "harmony." An independent audit of the WADA laboratories by an outside agency would also resolve many issues and decrease scientific uncertainty in results management.

WADA accredited labs without corrections..."junk in, junk out."
