Issue 85
Sampling: Theory and Practice
This is an abridged version of the keynote speech which Professor Kim Esbensen delivered at the LBMA Assaying and Refining Conference on 20 March 2017.
Although it feels a bit like ‘mission impossible’, this speech aims to provide an overview of the Theory of Sampling (TOS), including insights into the basic principles of representative sampling.
A Crucial Assumption
It is generally assumed that liquid metal is well mixed due to convection and stirring caused by the electromagnetic fields in induction furnaces. This is, of course, a crucial assumption for assaying work. But how true is it? Are all liquid metal pools in all crucibles always completely well mixed? Are impurities always completely uniformly distributed when we tap out the melt? How well can this be verified? By which approach? This is the crucial issue at the very end of the pathway from mine to product. There could easily be a heterogeneity issue involved. Were this so, any endeavour to improve on how to counteract heterogeneity could only benefit professional assaying in refining.
However, there are also many other things to talk about that come before this last analysis stage.
The AMIRA P754 project 2001 paper by Peter Gaylard noted that: “…to avoid the uncertainty related to systematic errors by a proper process concept and appropriate sampling, when inhomogeneity of the sample material can be presumed…” So, heterogeneity is recognised and acknowledged even at this latest stage of the journey from mine to analysis. And we also know, if we go to the other end of that pathway, that all the world’s precious metals most certainly were not mined out of the ground as material ready for the crucible. There is a huge and complex process going on from mining, producing a lot of broken ore, which is subjected to a massive series of mass reduction steps (which is nothing but sampling) before the crucible. There are severe order-of-magnitude differences between the mined mass and that of the analytical aliquot: at least one-to-a-million, up to one-to-a-billion [mass per mass]. This is a compound mass reduction process of staggering proportions, and all operations and stages of this process must be representative, otherwise it becomes impossible to make relevant and reliable decisions based on the ultimate analytical results.
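To put the quoted ratios in perspective, here is a minimal back-of-the-envelope sketch of the compound lot-to-aliquot mass reduction, using purely hypothetical lot and aliquot masses (a 1 t lot and a 1,000 t lot, each ending in a 1 g analytical aliquot); the figures are illustrative only:

```python
# Back-of-the-envelope sketch (hypothetical figures, not from the article) of
# the compound lot-to-aliquot mass reduction ratio discussed above.

def mass_reduction_ratio(lot_mass_kg: float, aliquot_mass_g: float) -> float:
    """Return the dimensionless lot-to-aliquot mass reduction ratio."""
    return lot_mass_kg * 1000.0 / aliquot_mass_g

examples = {
    "1 t lot     -> 1 g aliquot": mass_reduction_ratio(1_000, 1.0),
    "1 000 t lot -> 1 g aliquot": mass_reduction_ratio(1_000_000, 1.0),
}

for label, ratio in examples.items():
    print(f"{label}: 1 : {ratio:,.0f}")
# Prints ratios of 1:1,000,000 and 1:1,000,000,000 - the one-to-a-million to
# one-to-a-billion range quoted above, every step of which must be representative.
```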
Indeed, proper sampling is nothing but a series of representative mass reductions, all of which are crucial before analysis. We need to know how to do this properly and how to work against heterogeneity at all stages of the process. This is the job of sampling competence.
We all need to know some rudimentary basics regarding this matter. So, it would be nice if we could find an international standard that tells us all about how we should conduct the critical sampling and mass reductions. Fortunately, there is now such a standard.
Sampling – ‘how’ instead of ‘how big’
To illustrate, let us use an example from an industry sector other than precious metals – municipal waste. We need to sample this material because it is crucial to know in advance, when this type of waste is incinerated, how much dioxin we are emitting into the atmosphere – dioxin is one of the most potent toxic substances known to man. This is a terribly complex sampling job, but it is crucially important for public health. With (very) small concentrations (impurities, the concentration of which we want to characterise with the utmost accuracy and precision), we are up against heterogeneity of the most difficult kind, no matter the nature of the material. How do you take, say, a 1-kilogram sample of this material and document that it is representative? This is a tough job, but there are perfectly feasible ways to do it.
By the way, the concentration levels for the precursor chemicals that turn into dioxins in an industrial incinerator and are sent out into the atmosphere are identical to the ‘9999’ levels within refining. While both the precious metals and dioxin analytical methods and approaches can deal with the complexities regarding analysis, the really difficult issue is the extremely irregular spatial distribution of the analyte (or the precursors to the analyte). This constitutes the key heterogeneity issue.
When faced with the demand to take a representative sample, the question always uttered is: ‘How big should the primary sample be in order for it to be representative?’ And after that, we have the job of mass-reducing such a primary sample down to whatever is needed for the analytical determination, in a final series of steps – all of which must also be representative. It turns out that the issue is rather more complex than merely ‘getting a sample’.
Sample Size and Representativeness
It is not how big the sample should be, but rather how we can make the sample representative. This is the key question. When a sample has been collected in a representative fashion, it has whatever mass is determined by the sampling process and we simply have to accept this.
Scores of standards and guiding documents start out by fixing the size of the sample without this being based on anything empirical, such as a pilot heterogeneity assessment. This means that we are just following the tradition that a sample has to be as good as we can get it – so long as it is of the ‘required mass’. But going for the sample mass without insight into the target heterogeneity can never lead to a representative sample. We have to approach the issue another way. On the other hand, once we have licked heterogeneity, sampling gets simple – and we can then worry about the sample mass, etc. But not before.
Thus ‘sample mass’ is not the driver that will lead to representativity; however, a sample that has been collected following the rules of the Theory of Sampling will be representative. And such a sample will then be of whatever mass is needed within these specifications. The job of securing representative primary samples therefore also includes making sure that all primary sample masses can subsequently be mass reduced (sub-sampled) effectively and representatively.
Stages of Sampling
Representative sampling is always a multi-stage process – covering the whole pathway from primary sampling of the original lot (commodity, batch, consignment) to analysis of the final test portion, including the secondary and tertiary sub-sampling stages. Luckily, the exact same principles govern all sampling stages.
At all stages, sampling errors abound and our job is to eliminate those that can be eliminated and to reduce all others that are always with us. Thanks to the Theory of Sampling, we can go about this in a very systematic fashion.
DS 3077 – Horizontal Sampling
2013 saw the publication of the world’s first universal standard for representative sampling, the so-called ‘horizontal standard’. It describes the general principles needed for representative sampling of all types of material, at all scales and for all purposes. The horizontal nature means that the standard focuses specifically on heterogeneity and the sampling process itself, independently of the particular material or sector.
Here follows a sneak preview of DS 3077 “Representative Sampling – Horizontal Standard”. I had the privilege of chairing the working group responsible for producing this document. It took five years until everybody involved – industry, regulating authorities, scientists – agreed, unanimously, on this succinct 42-page standard. The illustration below manages to capture all the essentials of a proper representative sampling process – multiple stages, all with the exact same set of sampling errors, which the sampler has to suppress or eliminate – while also depicting the four Sampling Unit Operations available for this task.
Theory of Sampling (TOS)
Overview
There are 10 general elements in the Theory of Sampling and, remarkably, this is all we need to tackle any sampling objective, of any material, at any scale, for any purpose.
These elements are grouped into six Governing Principles (GP) and four Sampling Unit Operations (SUO). For example, the Principle of Sampling Simplicity (PSS) states that there is always a primary sampling stage and that, after it, we ‘only’ have to perform a series of representative mass reductions until we have produced the aliquot mass needed for analysis. The entirety of this latter task is covered by Sampling Unit Operation no. 10. From a systematic point of view, these are a series of ‘similar’ sampling operations, only taking place at smaller and smaller scales.
We should always be mindful that no analytical result can be better than what is allowed by the accumulated uncertainty from all these steps. Our job is to make each sampling operation representative, wherever in the lot-to-analysis pathway it takes place, i.e. no matter at what scale. Luckily, there is no interaction between the stages, so we can decompose all compound problems into a series of individual sampling operations governed by the same principles, using the same sampling unit operations, etc.
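As a rough numerical illustration of how the stage uncertainties accumulate – assuming, as argued above, that the stages do not interact and that each stage error can be expressed as a relative standard deviation – here is a minimal sketch; all the stage values are invented for illustration, not measured figures:

```python
import math

# Illustrative sketch: if the lot-to-analysis stages are independent, their
# relative variances add. The stage RSD values below are invented examples.

stage_rsd = {                       # relative standard deviation of each stage, in %
    "primary sampling": 8.0,
    "secondary sub-sampling": 4.0,
    "tertiary sub-sampling": 2.0,
    "analysis": 0.5,
}

total_variance = sum(rsd ** 2 for rsd in stage_rsd.values())
total_rsd = math.sqrt(total_variance)

for stage, rsd in stage_rsd.items():
    share = 100.0 * rsd ** 2 / total_variance
    print(f"{stage:24s} RSD {rsd:4.1f}%  ({share:4.1f}% of total variance)")
print(f"{'total (lot-to-analysis)':24s} RSD {total_rsd:4.1f}%")
# The total is dominated by the worst (usually the earliest) sampling stage;
# improving the analysis alone barely changes the overall uncertainty.
```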
Heterogeneity
The arch enemy of all our sampling efforts is heterogeneity.
The above illustration is obviously a cartoon, but it shows the essence of what we are up against. The overall, average concentration of the analyte (black spheres) is 10%, and the white spheres are the matrix, the filler, the gangue or whatever you want to call it. Remember, the analyte is often an ‘impurity’. The key feature is its irregular spatial distribution – heterogeneity. Let us say that we take just one sample (a ‘grab sample’). It might be the one to the right. This particular sample would carry 75% of the analyte – a pretty high estimate of the total average concentration in this lot. We can clearly see something is wrong here. This is a cause for concern. Let us take another sample, but this one turns out to carry 25% (the topmost sample), which forces us to a third sample, which perplexingly turns out to carry 0% of the analyte. Obviously, we are in deep trouble – the analytical values are all over the place. In such a situation, fingers are usually pointed at the laboratory, but completely without reason. What we experience here has absolutely nothing to do with the competence of the analytical laboratory. We are simply facing what is known as the Fundamental Sampling Error (FSE), a sampling artefact that is always with us when dealing with low analyte concentrations. Single samples such as those illustrated are simply too small to do a reasonable job; hence the perhaps at first understandable, but still futile, question: ‘How big… to be representative?’ A single sample would have to make up a very significant proportion of the whole lot (1/3 to 1/2 of the total lot mass) before it would stand any chance of being useful for estimating the overall concentration. This is clearly not the way to go.
Solutions
But we can in fact sample easily even in the case of adverse heterogeneity – through composite sampling. This is also the main door-opener to representative sampling at the primary stage. The illustration below is generic. A sample composed of, for example, the seven individual increments shown (which together make up the exact same mass as the singular grab sample also depicted) is able to ‘cover the heterogeneity’ of the lot in a vastly improved fashion compared to the grab sample. The ‘free parameter’ of all composite sampling procedures is ‘Q’, the number of increments one is willing to deploy to counteract the heterogeneity encountered. Should the sampler not be satisfied with a ‘too cautious’ Q (in the present illustration, Q = 7), the general rule for increasing the fit-for-purpose representativity of any composite sampling process is simply to increase the number of increments, Q (see DS 3077 (2013) and other references below).
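A minimal simulation of this idea – the zone structure, increment size and Q values are illustrative assumptions, not taken from the article or DS 3077 – shows how increasing Q narrows the spread of the estimates:

```python
import random

# A lot in the spirit of the cartoon above: the analyte averages 10% overall
# but is concentrated in a few 'rich' zones. All numbers are illustrative.

random.seed(1)

def make_lot(n_zones=20, particles_per_zone=500, average_grade=0.10):
    """Build a lot of particle zones; only 2 of the 20 zones carry the analyte."""
    rich = set(random.sample(range(n_zones), 2))
    zone_grade = average_grade * n_zones / len(rich)        # grade inside a rich zone
    return [[1 if (z in rich and random.random() < zone_grade) else 0
             for _ in range(particles_per_zone)] for z in range(n_zones)]

def composite_estimate(lot, q, increment_size=50):
    """Pool q increments, each from a randomly chosen zone, and return the grade in %."""
    picks = []
    for _ in range(q):
        picks += random.sample(random.choice(lot), increment_size)
    return 100.0 * sum(picks) / len(picks)

lot = make_lot()
for q in (1, 7, 30):                                        # Q = 1 is the grab sample
    estimates = [round(composite_estimate(lot, q), 1) for _ in range(10)]
    print(f"Q = {q:2d}: {estimates}")
# With Q = 1 the estimates jump between roughly 0% and 100%; at Q = 7 they
# tighten markedly, and at Q = 30 they scatter narrowly around the true 10%.
```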
There are two aspects of heterogeneity: compositional and distributional (spatial) heterogeneity, and the latter is the real enemy. But a structured composite sampling procedure, patterned on the problem at hand, will solve this problem. We only need to know how.
These few examples demonstrate the imperative of a pilot heterogeneity characterisation of any material for which we need a fully documented representative sampling procedure. Standard ‘sampling plans’, with pre-set ‘sample mass’ stipulations, are anathema to proper sampling.
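One simple form such a pilot characterisation can take is the replication experiment discussed later in this article: repeat the entire lot-to-analysis procedure a number of times and summarise the spread of the results as a relative standard deviation (often termed the relative sampling variability, RSV, in the TOS literature). A minimal sketch, using invented replicate values:

```python
import statistics

# Pilot heterogeneity check: replicate the full lot-to-analysis procedure and
# summarise the spread of the results. The values below are hypothetical;
# in practice they would be real analytical results from replicate samples.

replicate_results = [9.2, 11.8, 7.5, 14.1, 10.3, 8.8, 12.6, 9.9, 6.7, 13.0]  # analyte, %

mean = statistics.mean(replicate_results)
sd = statistics.stdev(replicate_results)          # sample standard deviation
rsv = 100.0 * sd / mean                           # relative sampling variability, %

print(f"mean = {mean:.2f}, sd = {sd:.2f}, RSV = {rsv:.1f}%")
# A large RSV signals that the current procedure cannot cope with the lot
# heterogeneity; what counts as fit-for-purpose is application-specific.
```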
Sampling in practice
This is an example of what I have seen within many industry sectors. It is not always the case that you can see material heterogeneity with your own eyes, and this indeed makes matters a little more challenging and complex. But even in this case there is no problem.
There is no such thing as a homogeneous material in science, technology and industry. The materials that we are dealing with are always heterogeneous to some degree; it is just a matter of to what degree. A logical and rational way to proceed is simply to treat all materials in need of sampling as if they were significantly heterogeneous. This is indeed also the simplest operational modus. Following the Theory of Sampling’s principles, the professional sampler does not need to switch the type of sampling operation used when addressing a different material. There is no change of procedures when heterogeneity differs – only Q changes. This simple, unified approach (Sampling Unit Operation no. 7) empowers us to tackle all sampling issues, regardless of lot size, form or the nature of the material, by addressing only their specific heterogeneity.
Sampling Errors
I want to introduce you to Pierre Gy, a giant in science who very sadly died in November 2015, and who single-handedly developed the Theory of Sampling from 1950 to 1975. He wrote nine books and gave more than 250 speeches on the subject. He carried out a tremendous amount of R&D, but never worked at a university. He was a consultant nearly all his life – a remarkable life.
Pierre Gy’s major breakthrough was to identify no fewer than seven sampling errors that cover everything that can go wrong with sampling. He then meticulously worked out how to avoid these errors, and their adverse impact on the uncertainty, as much as possible. It was a monumental job. Along the way, he earned two PhDs – one in mineral processing and one in statistics – in order to solve all the complex problems identified. There are only about 10 to 15 professionals in the world who have read his work in its entirety. Although complex, TOS can also be made more easily accessible: these seven sampling errors originate from only three sources – the material, the sampling equipment and the sampling process – depending on whether the lot is stationary or moving when sampling takes place.
Pierre Gy’s oeuvre is awe-inspiring; he is honoured in a special issue of the TOS Forum (2016).
Sampling Unit Operations
My own humble contribution to the Theory of Sampling has been to put TOS on an axiomatic footing and to develop it into the new standard now available to all of us. The whole theory can in fact be summarised as the six Governing Principles and four Sampling Unit Operations mentioned above. The Sampling Unit Operations (SUO) are the only instruments (the only concrete procedures) that we have at our disposal when we are called upon to solve sampling problems: i) composite sampling; ii) comminution; iii) mixing/blending; and iv) mass reduction (but not just any mass reduction – only representative mass reduction will do).
The above 10 elements are all we have at our disposal as professional samplers: four unit operations that we can apply in order to solve all practical problems, guided by only six Governing Principles. This is not rocket science, but it does need structured, rational thinking. We are all familiar with crushing, mixing, blending and sub-sampling, of course – but exactly how to deploy these agents when facing a specific heterogeneous material requires the full complement of GPs to succeed.
Measurement Uncertainty (MU)
We all know of and work with measurement uncertainty – a characteristic of analytical methods. The fishbone diagram (in figure 9) shows how the elements of analytical methods can be structured. We can always get everything under control for any analytical method following the principles of Measurement Uncertainty (MU), i.e. we can always get a valid estimate of the total analytical measurement uncertainty – which we can call MU(analysis).
There is one part of the fishbone diagram that traditionally is not considered, however, and that is the sampling errors, which are simply left out. It is of course not a good idea to leave out these additional uncertainty components, as they most emphatically always contribute to the effective total measurement uncertainty, MU(sampling + analysis). This is a significant, indeed often fatal, problem if not properly acknowledged and rectified.
In nearly every case that I know of – and, I am sure, in many others – the sampling errors are typically many orders of magnitude larger than the total analytical error. In fact, it is fair to say that the Theory of Sampling constitutes the missing link in MU. The TOS deals with all the sampling issues involved and delivers the best possible representative analytical aliquot upon which to carry out the analytical determination. There are therefore always these two elements to the total measurement error, which must include the sampling errors. A recent publication deals in full detail with these issues: TOS vs. MU, Esbensen & Wagner (2014).
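The quantitative point can be made with the standard rule that independent uncertainty contributions add in variance; the 10:1 ratio below is a purely hypothetical illustration, not a figure from the article:

```latex
\[
  s^{2}_{\mathrm{total}} = s^{2}_{\mathrm{sampling}} + s^{2}_{\mathrm{analysis}}
\]
% If, for illustration, s_sampling = 10 x s_analysis, then
\[
  s^{2}_{\mathrm{total}} = (10\, s_{\mathrm{analysis}})^{2} + s^{2}_{\mathrm{analysis}}
                         = 101\, s^{2}_{\mathrm{analysis}},
\]
% so the analytical contribution accounts for only about 1% of the total
% variance: halving the analytical error leaves the combined uncertainty
% essentially unchanged, whereas improving the sampling dominates the outcome.
```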
The above illustration is a snapshot of how this augmented systematics ties in with the analytical measurement fishbone schema. There are three types of errors ‘on the sampling side of the street’: one is only involved when we are sampling moving targets, and the remaining two groups are the incorrect sampling errors and the correct sampling errors. The first job of any sampling solution is to get rid of the incorrect sampling errors, because they produce a detrimental sampling bias.
Analytical Processes vs. Sampling Process – a monumental difference
We all know the difference between accuracy and precision. We need both of these to qualify an analytical process, for example. In the illustration below (at bottom left) is a ‘perfect’ analytical process. It is unbiased and precise. We can also have a situation (illustrated bottom right) where we still have precision but there is a bias. Under the statistical assumption that this bias is constant for the analytical process investigated, it is possible to make a bias correction by subtracting the estimated bias magnitude. This is done all the time in any professional analytical laboratory.
Let us now consider replicating the whole sampling process (including the analytical determination), say 10 times, to find out where we are. If the 10 analytical results distribute themselves as shown in the upper right illustration (grey area), this is most certainly not amenable to any statistical correction; we are not really sure what is going on here, and it is very surprising. We therefore try such a replication experiment again (yellow area) and maybe seek more insight from a third attempt (red area). The perplexing results are shown in the upper right illustration.
The conclusion reached by the Theory of Sampling is radical, but it also opens a door to how we can perform and document representative sampling in all events. The completely new issue is that the sampling bias is inconstant: it changes its magnitude every time we try to estimate it. This is because, for each set of replicate samples, we are extracting small portions of a significantly heterogeneous material, and the second time we try it we are definitely going to extract 10 different portions – we are getting our hands on different parts of a significantly heterogeneous material even when we ‘repeat’ the sampling process in a completely identical fashion. This often causes a lot of problems for first-time observers. However perplexing this may seem, this crucial reality remains: the sampling process will never give rise to the ‘same distribution’ of analytical results (red, yellow, grey areas in the illustration above). Unfortunately – and here comes the crunch – this feature cannot be modelled by classical statistics, by a normal distribution, nor by any more advanced distribution.
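A minimal sketch of this ‘inconstant bias’ point, assuming a crude segregated-lot model with invented zone grades (none of these numbers come from the article), in which three campaigns of 10 replicate grab samples each yield a different apparent bias:

```python
import random
import statistics

# Three campaigns of 10 replicate grab samples from a segregated lot.
# Zone grades, the true average and the campaign names are illustrative only.

random.seed(7)
TRUE_GRADE = 10.0                                   # true average analyte concentration, %

def grab_sample():
    """One grab sample hits one zone of a segregated lot, chosen at random."""
    return random.choice([0.0, 0.0, 0.0, 2.0, 5.0, 50.0, 80.0])

for campaign in ("grey", "yellow", "red"):          # cf. the three sets in the illustration
    results = [grab_sample() for _ in range(10)]
    mean = statistics.mean(results)
    print(f"{campaign:6s} campaign: mean = {mean:5.1f}%, apparent bias = {mean - TRUE_GRADE:+5.1f}%")
# Each campaign yields a different apparent bias, so there is no constant
# correction factor to subtract; the only remedy is to change the sampling
# process itself so that the incorrect sampling errors are eliminated.
```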
The issue is that a significantly heterogeneous lot, because of its distributional heterogeneity, cannot be considered as a simple collection of analytical results that we can throw classical statistics at and expect a ‘correction’ solution from as we do with analytical uncertainties. Taking into account the heterogeneity effects, we have to work in a different way.
The Theory of Sampling’s conclusion to this troubling issue is simply to demand that the sampling process be designed so as to eliminate the incorrect sampling errors. This is the most important requirement for representative sampling – eliminating all incorrect sampling errors, the ‘hidden’ culprits that produce the fatal inconstant sampling bias, which we cannot under any circumstances control or correct for.
Theory of Sampling – the necessary and sufficient framework for practical sampling
The Theory of Sampling treats all of the issues briefly introduced above, and much more, from a strict systematic point of view. There are a few other elements to it besides what I have managed to illustrate here, but these will not alter the overview already provided. The TOS is the definitive framework for all sampling-related matters, be these procedures, equipment or performance assessment and validation of existing sampling systems and installations (auditing). At a PhD level, it takes two to three days in the auditorium to get into this curriculum (but there is a severe reading requirement), there are many dedicated courses available for companies, industries and regulating authorities, and individuals can indulge in any level of self-study (see references below). Getting to know all of what is needed is thus not an impossible task.
The TOS is a systematic way of thinking which has, as its main elements, material heterogeneity and how to counteract this when sampling. It is all about the sampling process.
We should always be able to produce the most representative primary samples from any target lot and to mass reduce these competently in order to end up with the representative aliquot for analysis. Applied properly, the TOS allows us to forward only one aliquot to the laboratory for analytical determination. Only one is needed because the entire from-lot-to-analysis process honours the TOS’s principles for representativity.
Professor Kim H Esbensen, Danish Geological Survey and Aalborg University. Kim H Esbensen, PhD, Dr (hon), has been research professor in Geoscience Data Analysis and Sampling at GEUS, the National Geological Surveys of Denmark and Greenland (2010-2015), chemometrics/sampling professor at Aalborg University, Denmark (2001-2015), professor (Process Analytical Technologies) at Telemark Institute of Technology, Norway (1990-2000 and 2010-2015) and professeur associé, Université du Québec à Chicoutimi (2013-2016). He phased out a more than 30-year academic career for a quest as an independent consultant from 2016: www.kheconsult.com – but as he could not terminate his love for teaching completely, he is also active as an international guest professor here and there.
Kim, a geologist/geochemist/data analyst by training, has been working for more than 20 years at the forefront of chemometrics, but since 2000 has devoted most of his scientific and R&D work to the theme of representative sampling of heterogeneous materials, processes and systems (Theory of Sampling, TOS), PAT (Process Analytical Technology) and chemometrics. He is a member of five scientific societies, has published over 250 peer-reviewed papers and is the author of a widely used textbook in Multivariate Data Analysis (33,000 copies). He was chairman of the taskforce responsible for writing the world’s first horizontal (matrix-independent) sampling standard (2013) and is editor of the magazine TOS forum.
Kim is fond of the right breed of friends and dogs, swinging jazz, fine cuisine, contemporary art and classical music. He has been collecting science fiction novels for more decades than he is comfortable contemplating; still, as ever... it’s all in the future.
Kim can be contacted at khe.consult@gmail.com