Statistical Methods Expert Working Group

Aims and Intentions 

The Expert Working Group advises the AMC and the chemical community on the development and proper performance of analytical quality procedures, including uncertainty evaluation, calibration methods and limits of detection, method validation, quality assurance and internal quality control, and experimental design and optimisation. Statistics plays a key role in all these areas of the analytical sciences, and the Expert Working Group promotes and facilitates the use of the most suitable statistical methods within the Analytical Community through its publications, software and databases.

These aims are addressed by:

  • providing information about good basic statistical practice.
  • investigating the benefits and limitations of both traditional and more modern statistical methods in analytical science. 
  • facilitating the use of newer statistical techniques by providing the necessary information in a readily usable form.  This is achieved through the publication of a range of informative Technical Briefs and associated software.
  • applying statistical principles to unresolved problems relating to quality in analytical data.  
  • promoting best practice in method validation for both quantitative and qualitative methods of analysis, including the design of validation experiments and analysis of validation data.
  • providing and, where appropriate, contributing to the development of technical information on experimental procedures for method validation.
  • ensuring wide applicability of guidance from the Expert Working Group by collaboration with appropriate national and international organisations, including the UK Working Groups on Proficiency Testing and Reference Materials, BSI, Eurachem, and IUPAC.
  • reviewing, commenting on and developing guidance for method validation and quality assurance on behalf of the AMC.


The Expert Working Group is made up of consultants and representatives from government, industry and public analysts. Membership of the Expert Working Group, as of 1 July 2017, is:

  • Prof J N Miller - Chairman
  • Dr David Bullock - UK NEQAS 
  • Dr Andy Damant - AMC Technical Briefs Editor
  • Mr Mark Earll - Syngenta 
  • Dr Steve Ellison - LGC
  • Prof Tom Fearn  - UCL 
  • Dr Lee Gonzalez - Charles Sturt University
  • Mr Mike Healy - Environment Agency 
  • Mr Roy Macarthur  - Fera, York 
  • Mr Nigel Payne - Association of Public Analysts
  • Prof Mike Ramsey - University of Sussex
  • Dr Peter Rostron - University of Sussex
  • Prof Mike Thompson - Birkbeck College, University of London 
  • Mr Alex Williams CB - Consultant (corresponding member) 
  • Dr Roger Wood OBE - Secretary


The following have been published under the auspices of the Expert Working Group:


The role of proficiency testing in method validation
Accreditation and Quality Assurance, 2010, 15(2), 73

Technical Briefs of the Analytical Methods Committee

87. The correlation between regression coefficients: combined significance testing for calibration and quantitation of bias (Anal. Methods, 2019, 11, 1845)

82. Editorial: Should analytical chemists (and their customers) trust the normal distribution? (Anal. Methods, 2017, 9, 5843)

82. Are my data normal? (Anal. Methods, 2017, 9, 5847)

74. z-Scores and other scores in chemical proficiency testing—their meanings, and some common misconceptions (Anal. Methods, 2016, 8, 5553)

70. An analyst's guide to precision (Anal. Methods, 2015, 7, 8508)

69. Using the Grubbs and Cochran tests to identify outliers (Anal. Methods, 2015, 7, 7948)

68. Fitness for purpose: the key feature in analytical proficiency testing (Anal. Methods, 2015, 7, 7404)

61. The “Phase-of-the-Moon” paradox in uncertainty estimation (Anal. Methods, 2014, 6, 3201)

57. An introduction to non-parametric statistics (Anal. Methods, 2013, 5, 5373)

56. What causes most errors in chemical analysis? (Anal. Methods, 2013, 5, 2914)

55. Experimental design and optimisation (4): Plackett–Burman designs (Anal. Methods, 2013, 5, 1901)

54. Checking the quality of contracted-out analysis (Anal. Methods, 2012, 4, 3521)

53. Dark uncertainty (Anal. Methods, 2012, 4, 2609)

52. Bayesian statistics in action (Anal. Methods, 2012, 4, 2213)

50. Robust regression: An introduction (Anal. Methods, 2012, 4, 893)

49. Sporadic Blunders (March 2011; revised May 2011)

47. How continuous should 'continuous monitoring' be? (March 2010)

46. Internal quality control in routine analysis (February 2010)

39. Rogues and Suspects: How to Tackle Outliers (March 2009)

38. Significance, importance and power (March 2009)

37. Standard additions: myth and reality (March 2009)

36. Experimental Design and Optimisation (3): Some Fractional Factorial Designs (February 2009)

32. Optimising your uncertainty - a case study (July 2008)

30. The standard deviation of the sum of several variables (April 2008)

28. Using analytical instruments and systems in a regulated environment (September 2007)

27. Why are we weighting? (June 2007)

26A. Measurement uncertainty and confidence intervals near natural limits (Former AMC Recommendation re-issued as a Technical Brief in September 2008)

26. Experimental design and optimisation (2): handling uncontrolled factors (December 2006)

25. How good were analysts in the good old days before instrumentation? (October 2006)

24. Experimental design and optimisation (1): an introduction to some basic concepts (June 2006)

23. Mixture models for describing multimodal data (March 2006)

22. Uncertainties in concentrations estimated from calibration experiments (March 2006)

21A. The estimation and use of recovery factors (Former AMC Recommendation re-issued as a Technical Brief in September 2008)

19A. General and specific fitness functions for proficiency tests and other purposes - clarifying an old idea (Former AMC Recommendation re-issued as a Technical Brief in September 2008)

19. Terminology - the key to understanding analytical science. Part 2: Sampling and sample preparation (March 2005)

18A. What is proficiency testing? A guide for end-users of chemical data (Former Background Paper re-issued as Technical Brief 18A, July 2008)

18. GMO Proficiency testing: Interpreting z-scores derived from log-transformed data (December 2004)

17A. Test for 'sufficient homogeneity' in a reference material (Former AMC Recommendation re-issued as a Technical Brief in September 2008)

17. The amazing Horwitz function (July 2004)

16. Proficiency testing: assessing z-scores in the longer term (April 2004; revised April 2007)

15. Is my uncertainty estimate realistic? (December 2003)

14. A glimpse into Bayesian statistics (October 2003)

13. Terminology - the key to understanding analytical science: Part 1: Accuracy, precision and uncertainty (September 2003)

12. The J-chart: a simple plot that combines the capabilities of Shewhart and cusum charts, for use in analytical quality control (March 2003)

11. Understanding and acting on scores obtained in proficiency testing schemes (December 2002)

10. Fitting a linear functional relationship to data with error on both variables (March 2002)

9. A simple fitness-for-purpose control chart based on duplicate results obtained from routine test materials (February 2002)

8. The Bootstrap: A Simple Approach to Estimating Standard Errors and Confidence Intervals when Theory Fails (August 2001)

6. Robust statistics: a method of coping with outliers (April 2001)

5. What should be done with results below the detection limit? (April 2001)

4. Representing data distributions with kernel density estimates (Revised March 2006)

3. Is my calibration linear? (Revised December 2005)

2. The zL-score—combining your proficiency test results with your own fitness for purpose criterion (Revised December 2005)

AMC Technical Briefs

The new AMC Technical Briefs will bring up-to-date news of current technical issues to every member of the Analytical Division.