
Overview of the types of analytical method validation

Definition: Analytical method validation is the process, universally recognised by laboratories as an essential part of a quality assurance system, of demonstrating that the analytical procedure used for a particular test is suitable for its intended use.

Method validation is performed to obtain accurate, consistent and reliable results; an unreliable method can lead to high costs, since dependable analytical results are required for compliance in most areas of analysis. Guidelines for method validation are issued by recognised international organisations such as ISO, AOAC and IUPAC, and laboratories are expected to follow them in their validation work. It is the responsibility of a laboratory to provide correct analytical results to its customers, in short to demonstrate "fitness for purpose", i.e. that the analytical results reported are accurate and reliable. To achieve the required reliability, laboratories employ quality control, quality assurance and analytical method validation systems.

When a laboratory starts using a new method, it must check whether it has the resources and competence to run it: instrumentation, personnel expertise, reagents, the analytical methodology, calibration standards and certified reference materials. Wood highlighted that "the extent of internal validation and verification of a laboratory depends on the case in which the method is to be used". When assessing the degree of validation required, the laboratory must take into consideration any existing similar methods and available personnel, the customer's requirements, and whether the method is already in use in another laboratory.

When and why does a method need to be validated?

A method is validated to check whether its performance parameters are suitable for the purpose for which it was designed, i.e. for a particular analytical problem.

It is done when:

A new method has been designed.

An established method is revised, i.e. adapted to solve a new problem.

A method already in use is found to change over time when reviewed through quality control studies.

A developed method is being used by other laboratories, with different equipment and by different analysts.

Equivalence is to be demonstrated between two methods, e.g. a standard method and a newly developed one.

In some areas of analytical practice, such as clinical chemistry, the relevant validation requirements are specified by the field itself.

Method characteristics are grouped into three categories:

Application characteristics determine whether a method can be implemented in a particular laboratory situation; they include cost per test, the species analysed, sample size, total analysis time, workload, equipment, regulatory and personnel requirements, space, and health, safety and environmental conditions.

Methodology characteristics determine which factors, in principle, contribute to the best performance; they are generally concerned with analytical quality control and with the sensitivity and specificity of the method, and include reference materials, optimised test conditions, principles of standardisation, etc.

Performance characteristics determine how well a method performs in practice. These include recovery, accuracy, precision, detection limit, interference, limit of quantitation and linearity. Judgements about performance characteristics are based on statistical validation techniques.

2).Types of method validation schemes:

2a).Validation Using Alternative Method

When a fully validated method already exists for the analysis of a particular sample, a new method can be validated against it. The fully validated method acts as a reference standard against which the new method is compared. The reference method must itself be under control, with appropriate quality assurance measures in place, so that the results used to validate the new method are accurate.

2b).Validation Using Proficiency testing Schemes

ISO Guide 42 gives the procedure for performing proficiency testing schemes, and the statistical analysis is described in ISO 13528, "Statistical methods for use in proficiency testing by inter-laboratory comparisons". ILAC has published guide G13:2000, which outlines the "Guidelines for the Requirements for the Competence of Providers of Proficiency Testing Schemes". These guidelines are intended for providers of proficiency testing schemes who wish to demonstrate their competence through formal compliance with an internationally accepted set of requirements for the planning and implementation of such schemes.

2c).Validation Using Certified Reference Materials

A reference material is a substance whose property values are sufficiently homogeneous and well established to be used for the calibration of an apparatus, the assessment of a measurement method, or for assigning values to materials (IUPAC, The Orange Book). Certified reference materials (CRMs) provide traceability to an accepted unit for the properties stated. When the reference material is analysed using the newly developed method and the results obtained are accurate and suitably precise, the method can be considered validated. Note that well defined statistical criteria must be applied to confirm that the results are statistically acceptable and the method is properly validated.
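
As a rough illustration of checking a new method against a CRM, the following Python sketch applies a one-sample t-test to replicate results; the certified value, the data and the critical t value are assumptions invented for the example, not taken from the text.

# Minimal sketch (illustrative data): does the mean of replicate results obtained
# with the new method differ significantly from the certified value of a CRM?
from statistics import mean, stdev
from math import sqrt

certified_value = 10.0                            # certified concentration (assumed units)
results = [9.8, 10.1, 9.9, 10.2, 9.7, 10.0]       # hypothetical replicate results

n = len(results)
x_bar = mean(results)
s = stdev(results)                                # sample standard deviation (n - 1)

t_stat = (x_bar - certified_value) / (s / sqrt(n))
t_crit = 2.571                                    # two-tailed t, df = 5, 95 % confidence

print(f"mean = {x_bar:.3f}, s = {s:.3f}, t = {t_stat:.2f}")
print("no significant bias" if abs(t_stat) <= t_crit else "significant bias detected")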

Use of an unfamiliar method

Before using an unfamiliar method, the laboratory must check and verify that it meets the customer's requirements. The check should first be made against published data and then against in-house (internal) validation data. Once the laboratory is satisfied that it is competent with the method, the method can be approved and a standard operating procedure (SOP) written for it.

The following additional information may be necessary:

• The data that the laboratory has on the method developed.

• The quality procedures that may be required or that are already in place.

• Proficiency testing schemes in which the laboratory has participated.

• Any recognised international standard to which the laboratory conforms.

3).Total scheme of analytical method validation

[Flow diagram: define the performance specification → method development experiments → plan the method validation experiments (analyte stability; calibration model, range and linearity; precision and accuracy) → execute the experiments and evaluate the results → collate the results → compile the validation report → apply the validated method.]

4).Analytical Method development:

Method development is closely related to method validation, and it is difficult to say exactly where development ends and validation begins. During development, the key performance parameters are evaluated in order to choose a method that is suitable for its intended purpose in the laboratory. Developing a method requires consulting guidelines for defining the requirements and carrying out a literature search (e.g. EPA, ASTM, SABS guides). The aim of method development is to produce a procedure that is easy for laboratories to operate and convenient for customers.

Method validation should include, but is not limited to, the following steps:

1) Define the allowable error, or preferably the allowable total error, that can be tolerated without compromising the test or the method.

2) Select suitable experiments to estimate the expected types of analytical error.

3) Properly collect and record the experimental data.

4) Set up the statistical calculations needed to estimate the analytical errors.

5) Compare the observed error with the defined allowable error (see the sketch after this list).

6) Estimate the reliability of the analytical method's performance.
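
As an illustration of step 5, the sketch below compares an observed total error (estimated here, by one common convention, as bias plus twice the imprecision) with a pre-defined allowable total error; all values, including the allowable limit, are hypothetical.

# Minimal sketch: compare observed error with a defined allowable total error.
from statistics import mean, stdev

target = 50.0                                      # assigned value of the control sample (assumed)
allowable_total_error = 10.0                       # allowable total error in percent (assumed)
results = [51.2, 49.8, 50.6, 48.9, 50.3, 51.0, 49.5, 50.1]   # hypothetical replicates

bias_pct = abs(mean(results) - target) / target * 100
cv_pct = stdev(results) / mean(results) * 100

observed_total_error = bias_pct + 2 * cv_pct       # simple bias + 2*CV model
print(f"bias = {bias_pct:.2f} %, CV = {cv_pct:.2f} %, observed TE = {observed_total_error:.2f} %")
print("within allowable error" if observed_total_error <= allowable_total_error
      else "exceeds allowable error")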

5).Key parameters in Analytical Method Validation

Specificity and Selectivity

Linearity and Range

Accuracy

Precision

Limit of Detection

Limit of Quantitation

Robustness

5a).Specificity and Selectivity:

It is the capability of an analytical method to identify the desired analyte amidst other components (interferents) such as the matrix, excipients, degradants, impurities and other potential contaminants.

The analyst has to take note of the forms in which the active ingredient could exist: complexed or free, inorganic or organometallic, and, for elements such as iron, the oxidation state (ferric, Fe3+, or ferrous, Fe2+). Interferents are species that change the measured signal, either by reacting with the analyte or by responding to the method themselves. Chemical interferences can also occur, and the analyst has to decide at what level such changes remain acceptable. If a method lacks specificity, it is recommended that it be used together with other supporting procedures. In practice no method is so specific that it detects only the analyte of interest, so a combination of two or more procedures is often required to achieve the necessary differentiation; hence it can be said that "a method can be selective but not specific". There is often confusion between specificity and selectivity; the term most often used by the standards organisations is selectivity, and specificity can be regarded as the "ultimate degree of selectivity".
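
As a simple numerical illustration of an interference check (only one of many possible selectivity experiments), the sketch below compares the response of a pure standard with that of the same amount of analyte spiked into sample matrix; the signal values and the 2 % acceptance limit are assumptions for the example.

# Minimal sketch: apparent interference from the sample matrix (illustrative values).
standard_signal = 1.250          # response of the pure analyte standard
spiked_matrix_signal = 1.272     # response of the same analyte amount spiked into matrix
blank_matrix_signal = 0.015      # response of the analyte-free matrix (blank)

interference_pct = (
    (spiked_matrix_signal - blank_matrix_signal - standard_signal) / standard_signal * 100
)
print(f"apparent interference = {interference_pct:+.2f} %")
print("selectivity acceptable" if abs(interference_pct) <= 2.0 else "significant interference")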

5b).Linearity and Range:

Linearity can be defined as the ability of an analytical procedure, within a given range, to produce test results that are directly proportional to the concentration of the analyte in the sample. A minimum of five concentration levels, measured in replicate, is required to establish linearity.

A visual inspection of the plotted data can be convincing. A linear relationship between analyte concentration and test results is preferred but not strictly necessary. Parameters such as the slope, intercept, residual sum of squares and correlation coefficient should be reported, as they are needed to establish a linear fit. A linear relationship cannot be guaranteed without sufficient precision. The most common way to test linearity is least squares regression. A method may be considered linear as long as the bias is consistent over the assay range, as this reflects the accuracy of the method.

The calibration (regression) line is given by:

y = mx + c

where y is the measured response (signal), x is the analyte concentration, m is the slope and c is the intercept.
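
As an illustration of the least squares regression mentioned above, the following sketch fits y = mx + c to a hypothetical five-level calibration and reports the slope, intercept, correlation coefficient and residual sum of squares; all concentrations and responses are invented for the example.

# Minimal least-squares linearity sketch (illustrative calibration data).
from statistics import mean
from math import sqrt

conc = [2.0, 4.0, 6.0, 8.0, 10.0]            # analyte concentrations (x)
resp = [0.41, 0.79, 1.22, 1.60, 2.02]        # measured responses (y)

x_bar, y_bar = mean(conc), mean(resp)
sxx = sum((x - x_bar) ** 2 for x in conc)
syy = sum((y - y_bar) ** 2 for y in resp)
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(conc, resp))

m = sxy / sxx                                 # slope
c = y_bar - m * x_bar                         # intercept
r = sxy / sqrt(sxx * syy)                     # correlation coefficient
rss = sum((y - (m * x + c)) ** 2 for x, y in zip(conc, resp))   # residual sum of squares

print(f"slope = {m:.4f}, intercept = {c:.4f}, r = {r:.4f}, RSS = {rss:.5f}")

The value of r obtained in this way can then be compared with the acceptance criterion noted below.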

[Graph: calibration plot of the measured response against analyte concentration, illustrating linearity]

The correlation coefficient should be greater than 0.99 for acceptable linearity. The range of an analytical procedure is the interval between the highest and lowest concentrations of the analyte in the sample for which the procedure has an acceptable level of precision, accuracy and linearity.

The range characterises the concentration interval over which parameters such as accuracy, linearity and repeatability are evaluated.

Difference between linearity and range:

Linearity is the ability of the method to establish a linear relationship between the analyte concentration and the obtained test results, whereas range is the interval between the lowest and highest sample concentrations over which that relationship, together with acceptable accuracy and precision, is maintained. The limit of detection (LoD) is the lowest concentration of the analyte that can be reliably detected, and the limit of quantitation (LoQ) is the lowest concentration that can be measured with acceptable accuracy and precision; the working range typically extends from the LoQ up to the highest validated concentration. Linearity itself is described by the regression line fitted to the data and its correlation coefficient.

5c).Accuracy:

Accuracy of an analytical procedure can be defined as the closeness of agreement between the obtained test result and an accepted reference value.

Accuracy is related to error: the higher the accuracy of a result, the smaller the error. Accuracy is not the opposite of precision, but it can be said that "a result is more likely to be accurate if the results are precise". Accuracy strictly applies to results and not to other entities such as laboratories or analytical methods. Error is the difference between the test measurement and the standard or true value.

Error is of two types:

Systematic error (related to accuracy).

Random error (related to precision).

Systematic error is the component of error that, on repeated testing for a particular characteristic, remains constant or varies in a predictable way. Accuracy relates specifically to systematic errors, such as procedural errors, which can be corrected or reduced by careful checking at each step of the process. Systematic errors have identifiable causes, whereas the random errors associated with precision have no single identifiable cause.

Trueness is closely related to accuracy. Trueness is the closeness of agreement between the average (mean) of many test results and the accepted reference value. Trueness is expressed in terms of bias, which is the difference between the expectation of the test results and the accepted reference value. Bias is the total systematic error and can be positive or negative: if the expectation of the test results is greater than the reference value the bias is positive, and if it is lower the bias is negative. A larger bias reflects a larger systematic difference from the accepted reference value. In practice a conventional true value is used, as the actual true value cannot be known.
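
As a small worked example of bias and percent recovery against an accepted reference value (all values invented for illustration):

# Minimal sketch: bias and percent recovery relative to an accepted reference value.
from statistics import mean

reference_value = 25.0                          # accepted reference value (assumed units)
results = [24.6, 25.3, 24.9, 25.1, 24.7]        # hypothetical replicate results

bias = mean(results) - reference_value          # absolute bias
bias_pct = bias / reference_value * 100         # relative bias, %
recovery_pct = mean(results) / reference_value * 100

print(f"bias = {bias:+.2f} ({bias_pct:+.2f} %), recovery = {recovery_pct:.1f} %")
# A negative bias here means the method reads low on average (negative systematic error).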

5d).Precision:

Precision can be defined as the closeness of agreement between independent test results obtained under stipulated conditions.

As noted above, precision is associated with random errors, which have no single identifiable cause. Random error is the component of error that, on repeated testing, varies in an unpredictable way. Precision can be expressed in terms of the standard deviation or the coefficient of variation.

Standard deviation (S.D.) = sqrt[ Σ(xi − x̄)² / (N − 1) ]

The standard deviation is the square root of the variance. The smaller the standard deviation, the better the precision. Precision depends on the conditions of measurement, which are as follows:

Repeatability conditions.

Reproducibility conditions.

Under repeatability conditions, independent test results are obtained with the same method on identical test samples, using the same instrument and equipment, by the same person, within a short interval of time.

Under reproducibility conditions, independent test results are obtained with the same method on identical test samples but using different equipment, by different persons, in different laboratories.

Intermediate precision: intermediate precision characterises the spread of test results around the mean value, caused by random errors, under conditions intermediate between repeatability and reproducibility, e.g. the same equipment but a different operator and a longer time period.

Repeatability is the precision calculated under repeatability conditions; similarly, reproducibility is the precision calculated under reproducibility conditions. Run-to-run precision is the precision obtained when independent test results are produced in separate runs in the same laboratory using the same method and the same material; since separate runs are distinct in time, recalibrating the instrument before each run is advisable. Instrumental precision is the precision achieved by repeated measurements on a single prepared sample solution, with no adjustments to the instrument, within a short interval of time.
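
As an illustration of how repeatability and run-to-run variation can be estimated from replicate data, the following sketch pools the within-run variances of three hypothetical runs (with equal replicate counts, the mean of the run variances equals the pooled estimate) and computes the spread of the run means; all numbers are invented.

# Minimal sketch: repeatability (within-run) and run-to-run precision (illustrative data).
from statistics import mean, stdev

runs = [
    [10.1, 10.3, 10.2, 10.0],    # run 1: same method, sample, instrument, analyst
    [10.4, 10.2, 10.5, 10.3],    # run 2
    [9.9, 10.1, 10.0, 10.2],     # run 3
]

within_var = mean([stdev(r) ** 2 for r in runs])   # pooled within-run variance (equal n)
s_r = within_var ** 0.5                            # repeatability standard deviation

run_means = [mean(r) for r in runs]
s_between_runs = stdev(run_means)                  # spread of the run means

grand_mean = mean(run_means)
cv_r = s_r / grand_mean * 100                      # repeatability as coefficient of variation

print(f"repeatability s_r = {s_r:.3f} (CV = {cv_r:.2f} %), run-to-run s = {s_between_runs:.3f}")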

5e).Limit of Detection (LoD):

The detection limit can be defined as the lowest amount of analyte in a sample that can be detected, though not necessarily quantitated with acceptable accuracy.

It is generally expressed as '3S', where S is the standard deviation. The limit of detection is important in quantitative measurements when the concentration of the analyte is small. The detection threshold is not a sharp cut-off: the signal due to the analyte fades gradually as the concentration decreases, until the response becomes indiscernible. For qualitative measurements, in contrast, specificity becomes unreliable below the threshold concentration of the analyte. Changing the reagents, the spiking or fortification materials, or repeating the experiment can change the threshold value.

5f).Limit of Determination or Quantitation (LoQ):

The determination (quantitation) limit can be defined as the lowest amount of analyte in a sample that can be quantitatively measured with suitable precision and accuracy using the analytical procedure.

It can be expressed as '10S'. As this shows, the limit of determination is always greater than the limit of detection. The limit of detection should not be used for decision making, as it is only an indicative value. Neither the LoD nor the LoQ marks a level at which quantitation becomes impossible; rather, near the LoD the measurement uncertainty becomes of the same order of magnitude as the result itself.
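
As a sketch of the 3S and 10S estimates quoted above, using the standard deviation of replicate measurements of a blank or low-level sample (results assumed to be already in concentration units; all values are hypothetical):

# Minimal sketch: LoD = 3S and LoQ = 10S from replicate blank/low-level results.
from statistics import stdev

low_level_results = [0.12, 0.15, 0.11, 0.14, 0.13, 0.16, 0.12, 0.13, 0.15, 0.14]  # concentration units

s = stdev(low_level_results)       # standard deviation of the low-level replicates
lod = 3 * s                        # limit of detection (3S convention used in the text)
loq = 10 * s                       # limit of quantitation (10S convention)

print(f"s = {s:.4f}, LoD = {lod:.3f}, LoQ = {loq:.3f}")
# As expected, LoQ is always greater than LoD for the same standard deviation.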

5g).Robustness or Ruggedness:

Robustness can be defined as the capacity of an analytical procedure to remain unaffected by small but deliberate changes in the method parameters.

It indicates the reliability of the method during normal use. It is well known that, during a run, some parameters, if not carefully controlled, can cause severe changes in method performance and sometimes even total failure of the method. Such stages should be identified and their influence checked by ruggedness tests. These tests involve deliberate variation of the method to reveal the influence of each parameter on its performance. It then becomes easy to identify the parameter causing the greatest disturbance and to reduce its influence on the method's performance as far as possible. Ruggedness tests are usually directed at accuracy and precision.
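
As a simplified illustration of a ruggedness test, the sketch below compares the mean result under nominal conditions with the means obtained after deliberately varying one parameter at a time, and flags shifts larger than twice the repeatability standard deviation; the parameters varied, the data and the decision rule are all assumptions for the example.

# Minimal ruggedness sketch: one-parameter-at-a-time variation (illustrative data).
from statistics import mean

nominal_results = [50.1, 49.8, 50.0, 50.2]          # results under nominal conditions
s_r = 0.20                                          # repeatability SD from the precision study (assumed)

varied_conditions = {
    "mobile phase pH 6.8 -> 7.2":    [50.3, 50.5, 50.4, 50.2],
    "column temperature 25 -> 30 C": [50.0, 49.9, 50.1, 50.2],
    "flow rate 1.0 -> 1.2 mL/min":   [51.0, 51.3, 50.9, 51.1],
}

baseline = mean(nominal_results)
for change, results in varied_conditions.items():
    shift = mean(results) - baseline
    verdict = "significant" if abs(shift) > 2 * s_r else "negligible"
    print(f"{change}: shift = {shift:+.2f} ({verdict})")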

6).Application of Analytical method:

A method is considered ready for application when all the above criteria for method development have been satisfied and the method has been properly validated. Newly developed methods are checked against the standard methods already in use, and demonstrated equivalence with them is a satisfactory criterion for a method to be applied in practice by a laboratory or institution. All methods now in use should be licensed and validated.

7).Conclusion:

In conclusion, method validation is an important step in the development of a method; the performance parameters must be properly evaluated to ensure they are fit for purpose. The sensitivity of an analytical procedure depends on how well these key parameters are established. The main purpose of any developed method is to obtain accurate, precise and reliable results; the results obtained reflect the overall performance of the method and form the basis for trusting it.
