ASTM D7578 Standard Guide for Calibration Requirements for Elemental Analysis of Petroleum Products and Lubricants
6. Basic Considerations
6.1 All apparatus and instruments used in a laboratory require some form of calibration or verification before they are used to produce reliable data. A perfect analysis needs a perfect calibration as the first step and perfect quality control as perhaps the last step in the sequence of analytical events. Often this cycle is depicted as:
Calibration → Sample Analysis → QC Analysis → Calibration → ...
6.1.1 Some test methods may additionally require verification of the calibration using a check standard.
6.2 The overall program of calibration of equipment should be designed and operated so as to ensure that the measurements made in the testing laboratories are traceable (where the concept is applicable) to national standards of measurement, and where available, to international standards of measurement specified by such bodies.
6.2.1 Where the concept of traceability to national or international standards of measurement is not applicable, the testing laboratory should provide satisfactory evidence of correlation or accuracy of test results (for example, by participating in a suitable program of interlaboratory comparison), or by primary and interference-free classical chemistry techniques such as gravimetry or titrimetry.
6.3 Different test methods require different calibration intervals. Thus, a decision about appropriate calibration frequency shall be made on a case-by-case basis. Calibration practices are essential for all analytical testing and shall be thoroughly documented, covering both the plan and the factual evidence that it is being followed. Many laboratories tend to perform only the bare minimum of calibrations, much as they do with quality control requirements; this is not the way to achieve superior performance. Moreover, if an instrument is found to be out of calibration, and the situation cannot be immediately addressed, then the instrument shall be taken out of operation and tagged as such until the situation is corrected. Under no circumstances can data from that instrument be reported to the customers.
6.4 The performance of apparatus and equipment used in the laboratory but not calibrated in that laboratory (that is, pre-calibrated, vendor supplied) should be verified by using a documented, technically valid procedure at periodic intervals.
6.5 Calibration Standards - Calibration standards appropriate for the method and characterized with the accuracy demanded by the analysis to be performed shall be utilized during analysis. Quantitative calibration standards should be prepared from constituents of known purity. Use should be made of primary calibration standards or certified reference materials specified or allowed in the test method. A wide variety of such standards are available from commercial sources, NIST, etc. Many laboratories have the capability of preparing reliable in-house standards. Calibration standards identical to the samples being analyzed would be ideal, but failing that, at least some type of standard shall be used to validate the analytical sequence. In physical measurements this is usually achievable, but it is often difficult or sometimes almost impossible in chemical measurements. Even the effects of small deviations from matrix match and analyte concentration level may need to be considered and evaluated on the basis of theoretical or experimental evidence, or both. Sometimes the standard-additions technique can be used to calibrate the measurement system (see the sketch after Note 1). But because an artificially added analyte may not necessarily respond in the same manner as a naturally occurring analyte, this approach may not always be valid, particularly in molecular speciation work.
NOTE 1 - See Practice D4307 for recommendations in preparing liquid blends for use as analytical standards.
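As an illustration only (not part of this guide), the following minimal Python sketch shows how a standard-additions calibration is commonly evaluated: instrument responses are fitted against the added analyte concentrations, and the magnitude of the x-intercept estimates the native concentration in the sample. All numbers are hypothetical.

    import numpy as np

    # Hypothetical standard-additions data: analyte spiked into equal
    # aliquots of the sample (mg/kg) and the instrument responses.
    added = np.array([0.0, 5.0, 10.0, 20.0])        # spike levels, mg/kg
    response = np.array([12.1, 18.0, 24.2, 36.1])   # arbitrary units

    # Least-squares line: response = slope * added + intercept
    slope, intercept = np.polyfit(added, response, 1)

    # The native concentration is the magnitude of the x-intercept.
    native = intercept / slope
    print(f"Estimated native concentration: {native:.2f} mg/kg")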
6.5.1 If a laboratory prepares its own in-house calibration standards, the values assigned to these reference materials should be produced following the certification protocols used by NIST or other standards-issuing bodies, and should be traceable to national or international standard reference materials, if required or appropriate.
6.5.1.1 NIST uses seven modes for value assignment of reference materials for chemical measurements. NIST certified values are derived from: (1) certification at NIST using a single primary method with confirmation by other method(s); (2) certification using two independent critically-evaluated methods; or (3) certification using one method at NIST and different methods by outside collaborating laboratories. NIST reference values are derived from the last of these three modes, as well as from: (4) measurements by two or more laboratories using different methods in collaboration with NIST; (5) a method-specific protocol; (6) NIST measurements using a single method, or measurements by an outside collaborating laboratory using a single method; or (7) selected data from interlaboratory studies. The last four modes are also used for assigning NIST information values. See NIST Special Publication 260-136 for further details on this subject.
6.5.2 In addition to the oil-soluble organometallic compounds used for the calibration of instruments such as AAS, ICP-AES, or XRF, single-element or multi-element calibration standards may also be prepared from materials similar to the samples being analyzed, provided the calibration standards to be used have previously been characterized by independent, primary (for example, gravimetric or volumetric) analytical techniques to establish the elemental concentration at mass percent levels.
6.5.3 Reference Materials (RM) - These can be classified as primary or secondary.
6.5.3.1 The primary RMs are well-characterized, stable, homogeneous materials produced in quantity, with one or more physical or chemical properties experimentally determined within the stated measurement uncertainties. These are certified by a recognized standardization laboratory using the most accurate and reliable measurement techniques.
6.5.3.2 The secondary RMs are working standards or QC standards and may have undergone less rigorous evaluation for day-to-day use in the laboratory.
6.5.3.3 The two most important considerations when preparing reference materials are their homogeneity and stability. Considerable time and money would be wasted if analytical certification measurements were done on reference materials that were later found to be inhomogeneous with respect to the properties of interest. Hence, several randomly selected representative aliquots should be analyzed first to ensure homogeneity, as sketched below.
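One common statistical screen for homogeneity, offered here only as an illustrative sketch with made-up data, is a one-way analysis of variance across randomly selected aliquots: if the between-aliquot variation is not significantly larger than the within-aliquot (replicate) variation, the material may be considered homogeneous for the property tested. The 0.05 significance level is an assumed choice.

    from scipy.stats import f_oneway

    # Hypothetical duplicate results (mg/kg) for five randomly
    # selected aliquots of a candidate reference material.
    aliquots = [
        [101.2, 100.8],
        [100.9, 101.4],
        [101.6, 101.1],
        [100.7, 101.0],
        [101.3, 100.9],
    ]

    # One-way ANOVA: between-aliquot vs. within-aliquot variance.
    stat, p = f_oneway(*aliquots)
    if p < 0.05:  # assumed significance level, for illustration
        print(f"Possible inhomogeneity (F = {stat:.2f}, p = {p:.3f})")
    else:
        print(f"No inhomogeneity detected (F = {stat:.2f}, p = {p:.3f})")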
6.5.3.4 Similarly, if a reference material is found to be unstable over the period of its use, it would be of little benefit to the standardization community. However, it is not very practical to check the stability over an inordinately extended period of time before issuing the reference material for general use. Hence, testing the stability of the material continues as part of the ongoing quality control of reference materials.
6.5.3.5 Stock and working standards alike need to be stored in clean containers, out of direct sunlight (preferably in amber glass bottles) to safeguard against physical degradation, and in a contamination-free environment. One way of checking for degradation is to measure the response of an aliquot of the standard on the same instrument under identical instrumental conditions over a period of time and monitor it for changes, if any (a sketch follows). A list of suggested precautions to be taken in storage of reference materials is given in Table 1.
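The response-monitoring check described above reduces to a simple comparison against the initial response. The following Python sketch uses hypothetical readings and an assumed 2 % acceptance limit; neither is a requirement of this guide.

    # Hypothetical instrument responses for the same aliquot of a
    # standard measured under identical conditions over time.
    responses = [1.002, 0.999, 1.001, 0.995, 0.976]  # arbitrary units
    limit = 0.02  # assumed 2 % acceptance limit, for illustration

    initial = responses[0]
    for i, r in enumerate(responses):
        drift = abs(r - initial) / initial
        if drift > limit:
            print(f"Reading {i}: drift {drift:.1%} exceeds {limit:.0%} "
                  "- investigate possible degradation")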
6.5.3.6 Shaking the bottle containing the standard is recommended before an aliquot is taken out of the bottle to ensure the uniformity of the blends. If stirring is necessary, a PTFE-coated (polytetrafluoroethylene) magnetic stirrer is advisable. If the test method specifically prohibits shaking then this instruction may not apply.
6.5.4 Materials available from ASTM Proficiency Testing Programs may be used provided the data show a normal (Gaussian) frequency distribution of results. The consensus value is most likely the value closest to the true value of this material; however, the uncertainty attached to this mean value depends on the precision and the total number of participating laboratories. The expanded uncertainty of the consensus value is inversely proportional to the square root of the number of laboratories (L) used to establish it; thus, regardless of the variance of the individual results, for a sufficiently large L the uncertainty of the consensus value can become suitable for calibration purposes (see the sketch below). It has been observed that in some cases the variance on the mean value of such a proficiency testing program is large (that is, larger than the reproducibility of the test method used), making such materials not very useful for calibration work. They are, however, suited for use as quality control materials.
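The relationship described above can be made concrete with a short sketch using illustrative data only: the standard uncertainty of the consensus mean is s divided by the square root of L, and the expanded uncertainty applies a coverage factor, taken here as k = 2 for approximately 95 % confidence (an assumed, though common, choice).

    import math

    # Hypothetical results (mg/kg) from L participating laboratories.
    results = [54.8, 55.2, 54.6, 55.5, 54.9, 55.1, 54.7, 55.3, 55.0, 54.9]
    L = len(results)

    mean = sum(results) / L
    s = math.sqrt(sum((x - mean) ** 2 for x in results) / (L - 1))

    u = s / math.sqrt(L)   # standard uncertainty of the consensus mean
    U = 2 * u              # expanded uncertainty, coverage factor k = 2

    print(f"Consensus value: {mean:.2f} +/- {U:.2f} mg/kg (k = 2, L = {L})")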
6.5.5 Analysis of CRMs - Since CRMs will potentially be used for calibration and quality control of a large number of instruments and measurements, the values assigned to them need to be "accurate" values, that is, they should be within the overall uncertainty of "true" values. Hence, the methods used in certifying the values shall have a valid and well-described theoretical foundation, shall have negligible systematic errors and a high level of precision, and shall give "true" values with high reliability. These primary methods require skilled and experienced personnel, are time consuming and comparatively expensive to perform, and are perhaps uneconomical for routine field use. Three types of such methods may be used for certifying CRMs.
6.5.5.1 Measurement by a method of known and demonstrated accuracy performed by two or more analysts independently. Frequently an accurately characterized backup method is used to provide assurance of correctness of data.
6.5.5.2 Measurement by two or more independent and reliable methods whose estimated inaccuracies are small relative to the accuracy required for certification. The basic principles of the two techniques shall be entirely different; for example, copper determination by electrogravimetry and titrimetry is acceptable, but not by AAS and ICP-AES, since both of the latter methods are based on atomic spectroscopy. The likelihood of two independent methods being biased by the same amount in the same direction is small. When the results by two methods agree (see the sketch below), there is a good possibility that the results are accurate; three methods would almost guarantee it.
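One common way to judge whether results from two independent methods agree within their stated uncertainties is a normalized-error comparison, sketched below with hypothetical values; the technique and acceptance rule (|En| not exceeding 1) are offered as an illustration, not as a requirement of this guide.

    import math

    # Hypothetical copper results from two independent techniques,
    # each with its expanded uncertainty (k = 2), in mass %.
    x1, U1 = 2.015, 0.012   # electrogravimetry
    x2, U2 = 2.021, 0.016   # titrimetry

    # Normalized error: |En| <= 1 indicates agreement within the
    # stated expanded uncertainties.
    En = (x1 - x2) / math.sqrt(U1 ** 2 + U2 ** 2)
    print(f"En = {En:.2f}:", "agree" if abs(En) <= 1 else "disagree")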
6.5.5.3 Measurement via a worldwide network of laboratories, using both methods of proven accuracy and using existing certified reference materials as controls. It has to be recognized, however, that the mean value of results from a large number of laboratories may not necessarily represent an accurate value when the repeatability and reproducibility are large (that is, greater than those quoted in the test method used).
6.6 Sometimes, out of necessity, values for reference materials are quoted based on only one technique that does not qualify as a referee method for that analysis. Such values are usually labeled "for information only". These can be upgraded later to certified values when additional techniques or laboratories subsequently produce reliable confirmatory data.
6.7 Calibration Frequency - Calibration schedules vary with instrument type, some needing calibration before each set of analyses (for example, AAS), others requiring calibration at less frequent intervals (for example, XRF). An important aspect of calibration is the decision on calibration intervals, that is, the maximum period between successive recalibrations. Two basic and opposing considerations are involved: the risk of being out of tolerance at any time of use, and the cost in time and effort. The former should be the major concern because of the dilemma of what to do with the data obtained between the last calibration check known to be in tolerance and the first found to be out of tolerance. However, an overly conservative approach could be prohibitively expensive. A realistic schedule should reduce the risk of the former without undue cost and disruption to work schedules. The factors that need to be considered in a realistic schedule include:
6.7.1 Accuracy requirement for the measured data.
6.7.2 Level of risk involved.
6.7.3 Experience of the laboratory in use of the equipment or methodology.
6.7.4 Experience of the measurement community.
6.7.5 Manufacturer's recommendations.
6.7.6 External requirements for acceptability of data.
6.7.7 Cost of calibration and quality control.
6.8 An initial choice of calibration intervals may be made on the basis of previous knowledge or intuition. Based on the experience gained during use, the intervals could be expanded if the methodology is always within tolerance at each recalibration, or decreased if significant out-of-tolerance conditions are observed. Control charts may be used to monitor the measured value of a stable test item over time and correlate any change with the need to recalibrate (an illustrative sketch follows). Many laboratories use a posted schedule of calibration which is followed by the analysts. This is fine, so long as intelligent judgment is used in adhering to the schedule. If the quality control sample or routine sample data produced by an instrument appear to be of doubtful quality, the first thing to check is the quality control and calibration of the instrument, irrespective of the calibration schedule.
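A minimal Shewhart-style sketch of the control-chart idea mentioned above, with hypothetical data: the mean and standard deviation of an in-control baseline period define warning (2 sigma) and action (3 sigma) limits, which are assumed conventions here, against which later measurements of the stable test item are screened.

    import statistics

    # Hypothetical measurements of a stable test item; the first
    # values form the in-control baseline period.
    baseline = [5.02, 4.98, 5.01, 4.99, 5.03, 4.97, 5.00, 5.01]
    new_points = [5.02, 5.05, 4.91]

    mean = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)

    for x in new_points:
        z = abs(x - mean) / sigma
        if z > 3:
            print(f"{x}: beyond 3-sigma action limit - recalibrate")
        elif z > 2:
            print(f"{x}: beyond 2-sigma warning limit - watch closely")
        else:
            print(f"{x}: in control")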
6.8.1 There are some tests (for example, ICP-AES) where calibration is an integral part of the analysis and ASTM test methods explicitly state the needed frequency. In all such cases, this requirement shall be met.
6.9 Calibration versus Verification - Although the words verification and calibration are often used synonymously, they have different connotations. Verification pertains to checking that an instrument or system is in a condition fit for use, while calibration involves standardizing a measuring instrument by determining its deviation from a reference standard so as to ascertain the proper correction factors. For example, in the Test Method D5800 Noack evaporation loss test, the instrument is verified against a CEC check standard to confirm that it gives the correct value. In Test Method D892 (foam test), the air diffusers are verified for their maximum pore diameter and permeability against specifications before proceeding with the test. In the ash test (Test Method D482) and the sulfated ash test (Test Method D874), no specific verification or calibration is done other than the use of appropriate thermometers for monitoring the temperature of the oven or furnace, and balance calibration. On the other hand, in the D4951 and D5185 ICP-AES methods for metals, the instrument is calibrated over several concentration ranges to check that the linearity is acceptable (an illustrative sketch follows), and other additional checks are also required.
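The linearity check mentioned above can be illustrated with a least-squares fit over multi-level calibration standards. The data and the correlation-coefficient acceptance value of 0.999 below are illustrative assumptions only, not requirements of Test Methods D4951 or D5185.

    import numpy as np

    # Hypothetical multi-level calibration: concentrations (mg/kg)
    # and blank-corrected ICP-AES intensities.
    conc = np.array([0.0, 1.0, 5.0, 10.0, 50.0, 100.0])
    intensity = np.array([0.0, 102.0, 511.0, 1018.0, 5090.0, 10210.0])

    # Least-squares line and correlation coefficient of the fit.
    slope, intercept = np.polyfit(conc, intensity, 1)
    r = np.corrcoef(conc, intensity)[0, 1]

    print(f"slope = {slope:.2f}, intercept = {intercept:.2f}, r = {r:.5f}")
    if r < 0.999:  # assumed acceptance value, for illustration
        print("Linearity unacceptable - recalibrate or narrow the range")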
6.9.1 Many ASTM test methods either do not specify the calibration steps or do not give the frequency of calibration. In such cases, the incidence and the frequency are determined from prior laboratory experience, or industry practice, or both.
6.9.2 ASTM Practice D6792-07, Standard Practice for Quality System in Petroleum Products and Lubricants Testing Laboratories, states that procedures shall be established to ensure that measuring and testing equipment is calibrated, maintained properly, and is in statistical control. Items to consider when creating these procedures include:
6.9.2.1 Records of Calibration and Maintenance.
6.9.2.2 Calibration and Maintenance Schedule.
6.9.2.3 Traceability to National or International Standards.
6.9.2.4 Requirements of the Test Method or Procedure.
6.9.2.5 Customer Requirements.
6.9.2.6 Corrective Actions.
6.10 Calibration Documentation - All calibration records should be documented either in the instrument computer software or in manually prepared laboratory notebooks. This should include information such as date of last and next calibrations, the person who performed the calibration, method or procedure used for calibration, the material used for calibration, the values obtained during calibration, and the nature and traceability (if applicable) of the calibration standards. Records may be maintained electronically.
6.10.1 For instruments that require calibration, calibration and maintenance records may be combined. See Table 2.
6.11 Types of Calibrations - The laboratory apparatus and analytical instruments used in elemental analysis can be (arguably) divided into three categories, Class I, II, and III, based on the extent of calibration needed in each case, ranging from minimal or no calibration to extensive.
6.11.1 Class I - Apparatus include miscellaneous, unsophisticated equipment which may need no calibration, or only minimal verification, such as of motor speed or maintained temperature. Stirrers or some types of thermometers may fall in this category. Generally, these apparatus do not produce actual analytical data.
6.11.2 Class II - Apparatus include equipment that should be maintained, and possibly calibrated, on a routine basis and may have minimal verification requirements. This might include balances, temperature controllers, gas flow meters, etc., unless the data from these instruments constitute the final result of the analysis. The data from Class II instruments usually are not sent to the customers.
6.11.3 Class III - Instruments include sophisticated instrumentation/equipment that requires scheduled full verification, or calibration, or both, as given in the standard ASTM protocols, before the instrument is used for sample analysis. These may be performed by the analysts, outside contractors, or original equipment manufacturers (OEMs). For all of these instruments, ASTM standard test methods are available and should be followed in operation. The data produced from these instruments could be provided to the customers. Some of the specifics of calibration routines for Classes I, II, and III follow:
6.12 The three most commonly used accessories in most analytical testing are temperature measuring devices, time measuring devices, and balances.
6.12.1 Temperature Measuring Devices - These include liquid-in-glass thermometers, electronic digital thermometers, and thermocouple probes. With increasing concern about mercury toxicity, mercury thermometers are being replaced with electronic devices in laboratories as well as in ASTM test methods. Calibrated thermometers should have tags affixed to them indicating the date of the current and next calibration, the correction factor, if any, and the name of the person who calibrated them.
6.12.1.1 Critical TMDs are purchased from vendors with a certificate verifying that they have been calibrated using ASTM standard methods and are traceable to NIST standards. One certified set of thermometers should be used exclusively for verification of other TMDs (a sketch follows); annually, the certified set itself is verified using NIST-traceable standards. Over and above the annual recalibration of TMDs, some ASTM test methods specifically require additional calibration of thermometers as a part of the analytical procedure. Individual ASTM test methods should be consulted for details of required recalibrations; however, many ASTM test methods do not specify the frequency of calibration.
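Verification of a working TMD against the certified reference set amounts to a point-by-point comparison at each check temperature. In the sketch below, the paired readings and the 0.10 degree C acceptance tolerance are hypothetical, not values taken from any ASTM test method.

    # Hypothetical paired readings (deg C): certified reference
    # thermometer vs. the working device, at each check temperature.
    checks = [(40.00, 40.04), (100.00, 99.93)]
    tolerance = 0.10  # assumed acceptance tolerance, deg C

    for ref, device in checks:
        correction = ref - device
        status = "OK" if abs(correction) <= tolerance else "FAIL"
        print(f"{ref:.2f} C: device reads {device:.2f} C, "
              f"correction {correction:+.2f} C -> {status}")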
6.12.1.2 Thermometers that are not used in analytical testing are considered non-critical and generally are not calibrated. ASTM 120C and 121C certified liquid-in-glass thermometers are used in kinematic viscosity (Test Method D445) baths, which are typically controlled at working temperatures of 40 and 100°C, and are verified as described in Test Method D445. This service is conducted by an outside contractor. An ice-point calibration can be done as an interim step between full calibrations; this procedure is described in Test Method D445, Annex A2.
6.12.1.3 In addition to general use of thermometers, the following test methods specifically require monitoring using calibrated TMDs: Test Methods D482 (ash), D874 (sulfated ash), D1548 (vanadium in fuel oil), and D1552 (sulfur by high-temperature combustion).
6.12.2 Timers - Both stopwatches and electronic time-measuring devices, if necessary, can be calibrated using the time signals broadcast by NIST and received by calling the NIST phone number. The procedure is given in Test Method D445, Annex A3. The verification data should be recorded. Any timer not meeting the verification standard should be discarded; no other maintenance is expected on these timers.
6.12.3 Balances - The procedure for calibration of laboratory electronic mass balances is described in E319 and E898. Balances are usually calibrated once a year using NIST-traceable standard weights (an illustrative sketch follows). A record of such calibrations should be maintained.
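A balance verification with traceable weights reduces to checking each reading against the nominal mass within a tolerance. The weights, readings, and tolerances in the sketch below are illustrative assumptions only, not values taken from E319 or E898.

    # Hypothetical check of an analytical balance with traceable
    # standard weights: (nominal g, observed g, tolerance g).
    checks = [
        (1.0000, 1.0001, 0.0002),
        (10.0000, 10.0003, 0.0005),
        (100.0000, 99.9988, 0.0010),
    ]

    for nominal, observed, tol in checks:
        error = observed - nominal
        status = "PASS" if abs(error) <= tol else "FAIL"
        print(f"{nominal:>9.4f} g: error {error:+.4f} g -> {status}")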
6.12.4 A suggested calibration frequency for generic equipment used in elemental analysis is given in Table 3. In-house calibrations should follow reliable procedures and protocols recommended by NIST or other recognized standards writing bodies.