
    Product Testing, Part 8: Sampling is a Challenge

    Julia Stewart:
Hello, this is PMA PR Director Julia Stewart, and welcome back to our “Ask Dr. Bob” audio blog series on product testing, with PMA’s Chief Science and Technology Officer Dr. Bob Whitaker. This post is part of a series we’ve been doing on the topic of product testing. Bob, in several posts now you have indicated that sampling is actually a more significant challenge than the testing itself. Tell us more.

Dr. Bob Whitaker:
Julia, as we’ve talked about earlier in this series, the specificity and selectivity of pathogen tests are only half the equation – the other half is your sampling program, or the method you use to collect the fruit and vegetable material to be tested. From our previous posts, I think you can all see there are many challenges and benefits associated with the actual pathogen tests. Yet in many ways, developing a sampling methodology that can achieve statistically significant confidence levels is more troublesome.

    So, let’s start with what we know … Based on the millions of pounds of produce harvested, shipped and consumed each day by millions of people throughout the country without illness, we know that the frequency of pathogen contamination is low. We also know from data shared at the Center for Produce Safety’s Research Symposium in June 2010 that pathogens do not survive well in production environments.  Indeed, two days after purposely spraying attenuated E. coli O157:H7 on leafy greens crops, researchers could only recover it by using enrichment techniques.  (By the way, “attenuated” means the pathogen’s disease-causing gene has been deactivated so the bacteria can be used in testing without risking making anyone sick.)

So, because we face both low-frequency contamination and low pathogen survivability, it’s crucial for our sampling methods to be constructed so we can detect even sporadic, low levels of key pathogens. Further, contamination – when it does occur – is not uniform. When contamination is found in a field, it tends to be random and isolated. That in turn can make follow-up testing a big challenge. Leafy greens growers commonly report following up on confirmed positive tests with extensive field or finished product-level sampling, only to find that the initial positive test results are seldom repeated. The key to understanding sampling issues in produce is to understand the size of typical production lots or finished product production runs.

Just think about a single production block of fresh spinach. Let’s say the block is 10 acres in size; that’s about a day’s harvest for a small to medium size producer. Planted at a density of about 4 million plants to the acre, our production block has approximately 40 million plants in it. Each spinach plant at the time of harvest has 4-6 leaves, so taking the middle of that range – five leaves per plant – the total number of leaves in our production block is around 200 million.

Today, sampling programs generally follow a “Z” pattern originally developed for pesticide residue sampling, which is a much different sampling challenge than detecting microorganisms. Along the “Z” pattern, the sampler chooses 15 points and collects 4 samples from each point, for a total of 60 samples per block. The size of each sample generally ranges from 25 grams to 100 grams, or about 50 to 200 leaves. That means a maximum of about 12,000 leaves are collected in any given 60-point field sample. These are generally mixed in a sample bag to form a composite sample. From this composite, 50 to 200 leaves are selected to create a test sample. So, in our block of 200 million potential leaves, our test comes down to evaluating 50 to 200 leaves.

Another way of looking at this: a commercial spinach field has an average yield of about 12,000 pounds per acre. In our 10-acre block, that’s 120,000 pounds of harvested product. Using the sampling program currently employed by many in our industry, we are attempting to represent that 120,000 pounds of product by sampling about three pounds of product, and then selecting a quarter of a pound of leaves from that to actually test.
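These back-of-the-envelope figures are easy to verify. The short sketch below recomputes them using only the approximate numbers quoted in this post:

```python
# Back-of-the-envelope check of the sampling math described above.
# Every input is an approximate figure quoted in this post.

acres = 10
plants_per_acre = 4_000_000
leaves_per_plant = 5                      # midpoint of the 4-6 leaf range

total_leaves = acres * plants_per_acre * leaves_per_plant
print(f"Leaves in the block: {total_leaves:,}")          # 200,000,000

# "Z" pattern: 15 points x 4 samples, at most ~200 leaves per sample
max_leaves_collected = 15 * 4 * 200
print(f"Leaves collected:    {max_leaves_collected:,}")  # 12,000

# Only 50-200 leaves from the composite are actually tested
test_leaves = 200
print(f"Fraction of leaves tested: {test_leaves / total_leaves:.0e}")

# By weight: a quarter pound tested out of 120,000 lb harvested
harvested_lb = acres * 12_000
print(f"Weight fraction tested:    {0.25 / harvested_lb:.1e}")
```

Even taking the generous end of every range, the tested material is on the order of one leaf in a million.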

    Talk about finding a needle in a haystack!

Julia Stewart:
So why not just test more material then, Bob?

Dr. Bob Whitaker:
To be sure, there are many variations on the “Z”-pattern test just described. Some are doing a “Z”-pattern test on each acre within a production block; in the example above, that would be a factor of 10 greater than doing a single “Z” pattern on the whole 10 acres. Others are using patterns designed to pick up border areas as well as the center regions of a field, known as “box” or “box-X” patterns. So while your statistics may improve by a factor of 10 or so, unfortunately you’re still in needle-in-a-haystack territory.

    Remember, these contamination events are random, low frequency, and isolated. You could take a thousand samples from that same production block and only minimally increase the relative amount of product tested – and just as easily still fail to sample the exact location where the potential contamination resides. 
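To put rough numbers on why even a thousand samples barely moves the needle, the sketch below uses a deliberately simple model. The contamination figure – 1,000 contaminated leaves in the 200-million-leaf block – is a hypothetical of mine, not from this post, and the model optimistically assumes contaminated leaves are scattered at random with a perfectly sensitive test; real contamination is clustered and isolated, so actual odds are worse:

```python
# Probability that a test sample contains at least one contaminated leaf,
# under an optimistic model: contamination scattered randomly across
# leaves, and a perfectly sensitive test. Real-world contamination is
# clustered and isolated, so actual detection odds are lower.

def detection_probability(contaminated_fraction: float, leaves_sampled: int) -> float:
    """P(at least one contaminated leaf among leaves_sampled random leaves)."""
    return 1.0 - (1.0 - contaminated_fraction) ** leaves_sampled

# Hypothetical: 1,000 contaminated leaves in a 200-million-leaf block
p = 1_000 / 200_000_000

for n in (200, 12_000, 1_000_000):
    print(f"{n:>9,} leaves tested -> {detection_probability(p, n):.1%} chance of detection")
```

Under even these generous assumptions, the 200-leaf test sample described above would detect this contamination roughly one time in a thousand.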

    And, by the way, the same sampling issues arise if you’re talking about testing finished product. Let’s say you are packing 60 to 100 bags per minute, which is standard for some products.  Simply removing five or 10 bags of product every hour or so and then making a composite sample gives you very similar statistics compared to in-field testing.
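The packing-line arithmetic works out the same way; a minimal sketch using the upper end of the rates quoted above (bag weight cancels out of this ratio):

```python
# Fraction of finished-product bags that even enter the composite sample,
# using the upper-end rates quoted above.

bags_per_minute = 100              # line speed: 60-100 bags per minute
bags_per_hour = bags_per_minute * 60
bags_pulled_per_hour = 10          # 5-10 bags removed each hour

fraction = bags_pulled_per_hour / bags_per_hour
print(f"{bags_pulled_per_hour} of {bags_per_hour:,} bags/hour -> {fraction:.2%}")
# ...and only a 25-100 gram subsample of that composite is actually tested.
```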

Julia Stewart:
So how exactly do you sample effectively to detect contamination?

Dr. Bob Whitaker:
Well, with today’s technology, there’s really no satisfactory answer to that question. There are innovations on the horizon, such as environmental vacuums and other technologies being adapted from the Department of Defense, where screening vast areas for weaponized microorganisms has been a priority for several years. However, given the state of sampling technology, it’s important to understand what sampling and testing can – and can’t – do.

Right now, the only contamination events we can feel reasonably sure of detecting would be massive breakdowns in our food safety programs – for example, if a pesticide applicator used a grossly contaminated water source to mix pesticides and then applied them to edible portions of the product, or if an animal intrusion occurred where the animals were indeed infected with a human pathogen. Typical sampling programs might well detect this contamination, but these types of massive breakdowns have been well managed by Good Agricultural Practice programs and so have only rarely been associated with foodborne illnesses. In other words, if a massive breakdown occurs, there are food safety programs in place to identify these issues outside of testing, and generally producers do not harvest the crop.

I always come back to risk-based testing. If a producer knows a risk event may have occurred, like an animal intrusion, or if environmental conditions known to support pathogen survival may have been present during production, one might increase sampling in those specific locations. Likewise, if you are producing in a field where potential positive samples were detected in past seasons, it may make sense to screen those fields more intensely. It comes down to evaluating the risks associated with each production block or product run, and then using the context of the physical evidence, observations, and other food safety data to better target sampling. It means taking an active role in the sampling program and not simply putting its execution on auto-pilot.

Julia Stewart:
Wow, Bob, good points to consider! Next time, we’ll explore the critical issue of raw versus finished product testing. This is a central issue for our industry, with important ramifications for supply chain logistics.

    Thank you, listeners, for joining us!
