ISO 13528:2015 Statistical methods for use in proficiency testing by interlaboratory comparison
1 Scope
This International Standard provides detailed descriptions of statistical methods for proficiency testing providers to use to design proficiency testing schemes and to analyse the data obtained from those schemes. This Standard provides recommendations on the interpretation of proficiency testing data by participants in such schemes and by accreditation bodies.

The procedures in this Standard can be applied to demonstrate that the measurement results obtained by laboratories, inspection bodies, and individuals meet specified criteria for acceptable performance.

This Standard is applicable to proficiency testing where the results reported are either quantitative measurements or qualitative observations on test items.

NOTE The procedures in this Standard may also be applicable to the assessment of expert opinion where the opinions or judgments are reported in a form which may be compared objectively with an independent reference value or a consensus statistic. For example, when classifying proficiency test items into known categories by inspection, or when determining by inspection whether proficiency test items arise, or do not arise, from the same original source, and the classification results are compared objectively, the provisions of this Standard that relate to nominal (qualitative) properties may apply.
2 Normative references
The following documents, in whole or in part, are normatively referenced in this document and are indispensable for its application. For dated references, only the edition cited applies. For undated references, the latest edition of the referenced document (including any amendments) applies.
5.1 Introduction to the statistical design of proficiency testing schemes
Proficiency testing is concerned with the assessment of participant performance and as such does not specifically address bias or precision (although these can be assessed with specific designs). The performance of the participants is assessed through the statistical evaluation of their results following the measurements or interpretations they make on the proficiency test items.

Performance is often expressed in the form of performance scores, which allow consistent interpretation across a range of measurands and can allow results for different measurands to be compared on an equal basis. Performance scores are typically derived by comparing the difference between a reported participant result and an assigned value with an allowable deviation or with an estimate of the measurement uncertainty of the difference. Examination of the performance scores over multiple rounds of a proficiency testing scheme can provide information on whether individual laboratories show evidence of consistent systematic effects ("bias") or poor long-term precision.

The following Sections 5 to 10 give guidance on the design of quantitative proficiency testing schemes and on the statistical treatment of results, including the calculation and interpretation of various performance scores. Considerations for qualitative proficiency testing schemes (including ordinal schemes) are given in Section 11.
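As an illustration of how such performance scores are typically computed, the sketch below implements the two conventional forms described above: a z score, z = (x - x_pt)/sigma_pt, which compares the deviation from the assigned value x_pt with an allowable deviation sigma_pt (the standard deviation for proficiency assessment), and a zeta score, zeta = (x - x_pt)/sqrt(u(x)^2 + u(x_pt)^2), which compares it with the combined standard uncertainty of the participant result and the assigned value. The function names and the numerical values in the usage lines are illustrative assumptions, not taken from this Standard.

```python
import math

def z_score(x, x_pt, sigma_pt):
    """z score: deviation of the participant result x from the assigned value
    x_pt, scaled by the standard deviation for proficiency assessment sigma_pt."""
    return (x - x_pt) / sigma_pt

def zeta_score(x, u_x, x_pt, u_x_pt):
    """zeta score: deviation of x from x_pt, scaled by the combined standard
    uncertainty of the participant result and of the assigned value."""
    return (x - x_pt) / math.sqrt(u_x ** 2 + u_x_pt ** 2)

# Illustrative values only (not from the Standard): a reported result of 10.7
# against an assigned value of 10.0 with sigma_pt = 0.5 gives z = 1.4.
print(round(z_score(10.7, 10.0, 0.5), 2))           # 1.4
print(round(zeta_score(10.7, 0.2, 10.0, 0.1), 2))   # 3.13
```

Under the usual convention for z scores, |z| <= 2.0 indicates acceptable performance, 2.0 < |z| < 3.0 gives a warning signal and |z| >= 3.0 gives an action signal; the detailed criteria and the choice between the different scores are set out in the clauses of this Standard on performance evaluation.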