I recently attended a conference in a field that is somewhat tangential to my current field. While attending one of the networking events, I found myself describing how my role as a method development chemist has shifted further and further away from method development and closer to (okay, nearly entirely into) the world of method validations and instrument verifications.
Although I was conversing with a scientist outside my particular field, I was surprised to find that they shared my experience, namely that new methods are becoming less optimized but more thoroughly validated. In hindsight, the trend itself isn't surprising: fifty years ago, no guidelines existed specifying which parameters of a scientific method should be validated before putting it into use. This is partially because scientific disciplines were smaller then, partially because scientists had fewer computational and statistical tools with which to analyze their data, and partially because many of the instruments (whose results can otherwise appear black-box-like without validation experiments) hadn't been invented yet.
Fast forward to today, a world in which we have not only ever-more sophisticated instruments and statistical techniques, but also the interconnectedness that allows scientists to collaborate and reach consensus on these sets of validation guidelines. The result is that many scientific fields are finding that what was once the methodological Wild West is now a long stretch of traffic-jammed highway, where method implementation creeps along bumper to bumper rather than enjoying the fast track to adoption. It's bumper-to-bumper in the sense that many validation experiments are daisy-chained together, since many validation parameters require multiple similar experiments to be run on different days (and with replicates, of course). Now, to be fair, historically, and even in modern R&D environments, this highway to method implementation was never a completely open road; one still had to make sure that the method worked in a few basic ways, and there were still a few landmarks to pass before declaring oneself at the final destination at the end of the fast-track highway. Today, the final destination is the same: a functional, optimized, reliable method. But as anyone who's driven in a major city can attest, even when your final destination is within your sights, it can still take nearly forever to actually get there.
On one hand, I support the standardization in method validations. There’s nothing worse than scouring the scientific literature, finding a method that should be just right for your application, and then trying it, only to find that it doesn’t work at all as described (this is graduate school in a nutshell). Perhaps the authors forgot about some key interferences, or the method works but isn’t particularly reliable. Method validation requirements help prevent such unpalatable experiences. Furthermore, I would argue that scientifically, method validation is simply the right thing to do to ensure that one isn’t reporting false or misleading results.
But as the requirements for method validations balloon, so too does the time from conception to implementation. I recently broke down some of the current validations on my plate into their constituent tasks and calculated the minimum number of GC or LC samples that would need to be prepared, best-case scenario, to complete each validation. The total came to something over 300 samples, or roughly 2-3 weeks of sample preparation for a full-time-equivalent employee with no other tasks on their plate. Of course, we all know that projects require more time than simply the minutes spent in the laboratory, so this estimate can be multiplied severalfold. At this point, the timeline has changed from weeks to months…and that's with zero method development. So the question becomes: do we churn out these methods in a timely fashion, or do we spend the time to do it right? Sadly, I have seen organizations take the former position rather than the latter. After all, isn't it the responsibility of the method development chemist in the first place to develop a solid method that is truly optimized rather than just good enough?
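For readers who like to see the arithmetic, here is a rough back-of-envelope sketch in Python of how that timeline estimate falls out. The ~300-sample total is from my tally above, but the samples-per-day throughput and the overhead multiplier are illustrative assumptions, not measured figures; plug in your own lab's numbers.

```python
# Back-of-envelope estimate of a validation timeline.
# Only the ~300-sample total comes from the tally above; the
# throughput and overhead values below are hypothetical.

SAMPLES_TOTAL = 300      # minimum GC/LC samples across the pending validations
SAMPLES_PER_DAY = 25     # assumed prep throughput for one full-time analyst
OVERHEAD_FACTOR = 4      # assumed multiplier for instrument time, review, documentation, scheduling

prep_days = SAMPLES_TOTAL / SAMPLES_PER_DAY       # ~12 working days of pure sample prep
prep_weeks = prep_days / 5                        # ~2.4 weeks
elapsed_weeks = prep_weeks * OVERHEAD_FACTOR      # ~10 weeks once everything else is counted

print(f"Sample prep alone: ~{prep_weeks:.1f} weeks")
print(f"Realistic elapsed time: ~{elapsed_weeks:.0f} weeks (~{elapsed_weeks / 4.3:.1f} months)")
```

Even with generous assumptions, the elapsed time lands in the months range before a single minute of actual method development is spent.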
We've not yet spoken about organizational agility, but it too deserves a place in this discussion. To remain competitive in a rapidly changing scientific landscape, organizations must be able to produce and implement methods quickly. Ideally, organizations would identify the landmarks that act as barriers to method development and validation and remove them to maximize agility. Method implementation is a marathon and not a sprint, and running a marathon is much easier on land than underwater.
Finally, I want to acknowledge those who must deal with method validations from outside the lab. It can be frustrating to know that a method is under development yet not see a complete validation package for weeks, months, or longer. But the scientists are not the enemy. We are trying to strike a balance between doing it right and doing it quickly, and even that pace is too slow for many managers. Listen to us and work with us, and together we can move towards adopting new and improved scientific methods.