
The Practical Chemist

Calibration Part II – Evaluating Your Curves

By Amanda Rigdon

Despite the title, this article is not about weight loss – it is about generating valid analytical data for quantitative analyses. In the last installment of The Practical Chemist, I introduced instrument calibration and covered a few ways we can calibrate our instruments. However, running several standards across a range of concentrations and plotting a curve from the resulting data does not guarantee that the curve accurately represents our instrument's response across that concentration range. Before we can make that claim, we have to look at a couple of quality indicators for our curve data:

  1. correlation coefficient (r) or coefficient of determination (r²)
  2. back-calculated accuracy (reported as % error)

The r or r² values that accompany our calibration curve measure how closely the curve matches the data we have generated. The closer these values are to 1.00, the more accurately our curve represents our detector response. Generally, r values ≥ 0.995 and r² values ≥ 0.990 are considered ‘good’. Figure 1 shows a few representative curves, their associated data, and r² values (concentration and response units are arbitrary).

Figure 1: Representative Curves and r² values
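The r² value reported with a calibration curve is simply the coefficient of determination of the least-squares fit. As a minimal illustration of where that number comes from (the data below are invented for the example, not the values behind Figure 1), it can be computed by hand:

```python
# Least-squares linear fit and r-squared for a calibration curve.
# The calibration data here are made up for illustration only.

def linear_fit(x, y):
    """Return (slope, intercept) of the least-squares line y = mx + b."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

def r_squared(x, y, slope, intercept):
    """r² = 1 - (residual sum of squares / total sum of squares)."""
    mean_y = sum(y) / len(y)
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

conc = [10, 50, 100, 250, 500]     # calibration levels (arbitrary units)
resp = [25, 125, 250, 625, 1250]   # detector responses (perfectly linear here)
m, b = linear_fit(conc, resp)
print(round(r_squared(conc, resp, m, b), 4))  # perfectly linear data -> 1.0
```

With real detector data the responses will scatter around the line, pulling r² below 1.0; the point of the article is that even an r² above 0.990 can hide problems at the ends of the curve.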

Let’s take a closer look at these curves:

Curve A: This represents a case where the curve perfectly matches the instrument data, meaning our calculated unknown values will be accurate across the entire calibration range.

Curve B: The r² value is good, and visually the curve matches most of the data points fairly well. However, our two highest calibration points do not follow the trend of the rest of the data; their responses should be closer to 1250 and 2500. The fact that they are much lower than expected could indicate that we are starting to overload our detector at the higher calibration levels; that is, we are putting more mass of analyte into the detector than it can reliably measure. This is a common problem when dealing with concentrated samples, which makes it especially likely in potency analyses.

Curve C: Although our r² value is still okay, we are not detecting analytes as we should at the low end of our curve. In fact, at our lowest calibration level, the instrument is not detecting anything at all (zero response at the lowest point). This is a common problem in residual solvent and pesticide analyses, where the required detection levels for some compounds, like benzene, are very low.

Curve D: This is a perfect example of a curve that does not represent our instrument response at all. A curve like this indicates a possible problem with the instrument or with sample preparation.

So even if our curve looks good, we could be generating inaccurate results for some samples. This brings us to another measure of curve fitness: back-calculated accuracy (expressed as % error). This is an easy way to determine how accurate your results will be without performing a single additional run.

To measure back-calculated accuracy, we simply plug the area values obtained from our calibrators back into the calibration curve to see how well the curve calculates those concentrations relative to the known values. We can do this by reprocessing our calibrators as unknowns or by hand. As an example, let’s back-calculate the concentration of our 500 level calibrator from Curve B. The formula for that curve is: y = 3.543x + 52.805. If we plug 1800 in for y and solve for x, we end up with a calculated concentration of 493. To calculate the error of our calculated value versus the true value, we can use the equation: % Error = [(calculated value – true value)/true value] * 100. This gives us a % error of -1.4%. Acceptable % error values are usually ±15–20%, depending on the analysis type. Let’s see what the % error values are for the curves shown in Figure 1.
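Using the Curve B numbers from the paragraph above (y = 3.543x + 52.805, a measured response of 1800, and a true concentration of 500), the back-calculation can be sketched in a few lines; the helper function name is mine, not part of any instrument software:

```python
# Back-calculate a calibrator concentration from a linear calibration
# curve and report its % error against the known (true) concentration.

def back_calculated_error(response, true_conc, slope, intercept):
    """Solve y = mx + b for x, then compute % error vs. the true value."""
    calc_conc = (response - intercept) / slope
    pct_error = (calc_conc - true_conc) / true_conc * 100
    return calc_conc, pct_error

# Curve B from the article: y = 3.543x + 52.805; 500-level calibrator
# gave a response of 1800.
calc, pct_err = back_calculated_error(1800, 500, 3.543, 52.805)
print(round(calc), round(pct_err, 1))  # 493 -1.4
```

Running every calibrator through the same calculation reproduces a table like Table 1 below without a single additional injection.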

Table 1: % Error for Back-Calculated Values for Curves A – D

Our % error values have told us what our r² values could not. We knew Curve D was unacceptable, but now we can see that Curves B and C will yield inaccurate results for all but the highest levels of analyte – even though the results were skewed at opposite ends of the curves.

There are many more details regarding generating calibration curves and measuring their quality that I did not have room to mention here. Hopefully, these two articles have given you some tools to use in your lab to quickly and easily improve the quality of your data. If you would like to learn more about this topic or have any questions, please don’t hesitate to contact me at amanda.rigdon@restek.com.

AOCS Highlights Cannabis Lab Standards, Extraction Technology

By Aaron G. Biros

The American Oil Chemists’ Society (AOCS) held its annual conference in Salt Lake City this week, with a track focused on cannabis testing and technology. Cynthia Ludwig, director of technical services at AOCS and member of the advisory panel to The Emerald Test, hosted the two-day event dedicated to all things extraction technology and analytical testing of cannabis.

Highlights in the discussion surrounding extraction technologies for the production of cannabis concentrates included the diversity of concentrate products, solvent selection for different extraction techniques and the need for cleaning validation in extraction equipment. Jerry King, Ph.D., research professor at the University of Arkansas, began the event with a brief history of cannabis processing, describing the physical morphologies in different types of extraction processes.

J. Michael McCutcheon presents a history of cannabis in medicine

Michael McCutcheon, research scientist at Eden Labs, laid out a broad comparison of different extraction techniques and solvents in use currently. “Butane is a great solvent; it’s extremely effective at extracting active compounds from cannabis, but it poses considerable health, safety and environmental concerns largely due to its flammability,” says McCutcheon. He noted it is also very difficult to get USP-grade butane solvents, so the quality can be lacking. “As a solvent, supercritical carbon dioxide can be better because it is nontoxic, nonflammable, readily available, inexpensive and much safer.” The major benefit of using supercritical carbon dioxide, according to McCutcheon, is its tunability, which allows the extractor to be more selective and produce a wider range of product types. “By changing the temperature or pressure, we can change the density of the solvent and thus the solubility of the many different compounds in cannabis.” He also noted that supercritical carbon dioxide requires much higher working pressures than hydrocarbon solvents, so the extraction equipment needs to be rated accordingly and is generally more expensive.

John A. Mackay, Ph.D., left at the podium and Jerry King, Ph.D., on the right

John A. Mackay, Ph.D., senior director of strategic technologies at Waters Corporation, believes that cannabis processors using extraction equipment need to implement cleaning SOPs to prevent contamination. “There is currently nothing in the cannabis industry like the FDA CMC draft for the botanical industry,” says Mackay. “If you are giving a child a high-CBD extract and it was produced in equipment that was previously used for another strain that contains other compounds, such as CBG, CBD or even traces of THC extract, there is a high probability that it will still contain these compounds as well as possibly other contaminants unless it was properly cleaned.” Mackay’s discussion highlighted the importance of safety and health for workers throughout the workflow as well as the end consumer.

Jeffrey Raber, Ph.D., chief executive officer of The Werc Shop, examined different testing methodologies for different applications, including potency analyses with liquid chromatography. His presentation was notable for proposing a solution to the currently inconsistent classification system for cannabis strains. “We really do not know what strains cause what physiological responses,” says Raber. “We need a better classification system based on chemical fingerprints, not on baseless names.” Raber suggests using a chemotaxonomic system to identify physiological responses in strains, noting that terpenes could be the key to these responses.

Cynthia Ludwig welcomes attendees to the event.

Dylan Wilks, chief scientific officer at Orange Photonics, discussed the various needs in sample preparation for a wide range of products. He focused on sample prep and variation for on-site potency analysis, which could give edibles manufacturers crucial quality assurance tools in process control. Susan Audino, Ph.D., chemist and A2LA assessor, echoed Wilks’ concerns over sample collection methods. “Sampling can be the most critical part of the analysis and the sample size needs to be representative of the batch, which is currently a major issue in the cannabis industry,” says Audino. “I believe that the consumer has a right to know that what they are ingesting is safe.” Many seemed to share her sentiment about the current state of the cannabis testing industry. “Inadequate testing is worse than no testing at all and we need to educate the legislators about the importance of consumer safety.”

46 cannabis laboratories participated in The Emerald Test’s latest round of proficiency testing for potency and residual solvents. Cynthia Ludwig sits on the advisory panel to give direction and industry insights, addressing specific needs for cannabis laboratories. Kirsten Blake, director of sales at Emerald Scientific, believes that proficiency testing is the first step in bringing consistency to cannabis analytics. “The goal is to create some level of industry standards for testing,” says Blake. Participants in the program will be given data sets, judged by a consensus mean, so labs can see their score compared to the rest of the cannabis testing industry. Proficiency tests like The Emerald Test give labs the ability to view how consistent their results are compared to the industry’s results overall. According to Ludwig, the results were pleasantly surprising. “The results were better than expected across the board; the vast majority of labs were within the acceptable range,” says Ludwig. The test is anonymous so individual labs can participate freely.
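The consensus-mean scoring described above is commonly expressed as a z-score in ISO 13528-style evaluations. The sketch below is hypothetical (the lab values are invented, and a simple mean and standard deviation stand in for the robust estimators the standard actually recommends):

```python
# Hypothetical sketch of consensus-based proficiency scoring: each lab's
# result is converted to a z-score against the group statistics, and
# results with |z| > 2 are flagged for review. All values are invented.

def z_scores(results):
    """z = (result - consensus mean) / standard deviation, per lab."""
    n = len(results)
    mean = sum(results) / n
    sd = (sum((r - mean) ** 2 for r in results) / (n - 1)) ** 0.5
    return [(r - mean) / sd for r in results]

# Reported THC potency (%) from several labs for the same test sample;
# the last value is a deliberate outlier.
labs = [17.8, 18.1, 18.0, 17.9, 18.2, 21.5]
scores = z_scores(labs)
flagged = [z for z in scores if abs(z) > 2]  # conventional warning limit
print(len(flagged))
```

Because every lab is scored against the same consensus statistics, each participant can see how far its result sits from the group without any lab's identity being revealed.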

The AOCS cannabis working groups and expert panels are collaborating with Emerald Scientific to provide data analytics reports compliant with ISO 13528. “In the absence of a federal program, we are trying to provide consistency in cannabis testing to protect consumer safety,” says Ludwig. At the AOCS annual meeting, many echoed those concerns of consumer safety, proposing solutions to the current inconsistencies in testing standards.