Streamlining the Literature Review Process
Literature reviews play an integral role in the regulation of medical devices and technologies. We spoke with Evidence Partners’ CEO Peter O’Blenis to learn more about the challenges of literature reviews, their role in supporting patient safety and global regulatory approval, and new tools that help medtech developers avoid audit failures while increasing efficiency.
How do literature reviews contribute to evidence-based research in the context of EU MDR compliance and, more specifically, to clinical evaluation reports (CERs) and performance evaluation reports (PERs)?
O’Blenis: Literature reviews have always been the cornerstone of evidence-based research. They are used in a variety of ways by medical device firms. The literature review is a fundamental component of a CER, which is part of the regulatory process for getting products approved and keeping them approved. Those CERs need to be created and submitted and then periodically updated and sent in to notified bodies for review to demonstrate safety and efficacy of the product based on what’s available in the literature.
If you are using traditional methods, this is a very arduous process. A literature review can contain hundreds, thousands and, in some cases, tens of thousands of papers. Those papers, in turn, can contain tens of thousands, hundreds of thousands or even millions of cells of data. And because you’re collaborating in teams, it becomes logistically challenging. Using a software tool makes this process faster, easier to manage and safer from a quality perspective.
How do automation and artificial intelligence (AI) support the literature review process?
O’Blenis: When you conduct a literature review, the first step is defining the question and then conducting a search for potentially relevant papers. Then you go through a process of screening those papers to determine which subset of them is relevant. That in and of itself is a fairly significant process. AI can jump in there, monitor which papers you are determining are relevant and which ones are not, and learn to make that decision for you. Then it can start handling parts of the screening process itself. And it doesn’t involve a lot of extra work for the reviewer. It just sort of happens in the background.
There are other aspects as well. The system will allow you to build what we call classifiers. They are AI models that can identify specific attributes. For example, we fed the system thousands of systematic literature reviews and we fed it thousands of papers that were not systematic literature reviews, and it learned to tell the difference. So you can have a question on the form that says, ‘Is this a systematic review?’ And the AI can answer that question for you.
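The kind of classifier described above can be pictured with a toy example. DistillerSR’s actual models are not public; the sketch below is simply a minimal bag-of-words Naive Bayes classifier in plain Python, trained on a handful of invented paper titles, that learns to answer a yes/no question such as “is this a systematic review?”

```python
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

class TitleClassifier:
    """Toy bag-of-words Naive Bayes classifier (illustrative only,
    not the product's implementation)."""

    def __init__(self):
        self.word_counts = {True: Counter(), False: Counter()}
        self.doc_counts = {True: 0, False: 0}

    def train(self, text, label):
        self.doc_counts[label] += 1
        self.word_counts[label].update(tokenize(text))

    def predict(self, text):
        vocab = set(self.word_counts[True]) | set(self.word_counts[False])
        total_docs = sum(self.doc_counts.values())
        scores = {}
        for label in (True, False):
            score = math.log(self.doc_counts[label] / total_docs)  # class prior
            total_words = sum(self.word_counts[label].values())
            for word in tokenize(text):
                # Laplace smoothing so unseen words don't zero out a class
                count = self.word_counts[label][word]
                score += math.log((count + 1) / (total_words + len(vocab)))
            scores[label] = score
        return scores[True] > scores[False]

# Invented training titles: True = systematic review, False = not
clf = TitleClassifier()
clf.train("systematic review of cardiac devices", True)
clf.train("a systematic review and meta-analysis of stents", True)
clf.train("randomized controlled trial of a new drug", False)
clf.train("case report of a device malfunction", False)
```

With even this tiny training set, the model starts answering the screening question for unseen titles, which is the pattern described: the reviewer’s own decisions become the labels, and the classifier takes over routine cases.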
The last piece is, when you’re pulling references in from different sources, you will get a lot of duplicates. They may not be formatted the same way. For example, they may have different author fields or different structures. But it’s important to remove the duplicates because, if you do not, you are not only spending a lot more time reviewing things you don’t need to review, you may also be skewing your results by overweighting a specific paper. So you have to get the duplicates out. Traditionally, people use exact-match approaches: if a reference looks exactly the same, it’s easy to flag as a duplicate. But references don’t always match exactly. AI-based tools can look at the content rather than the structure, so you end up with a much more powerful deduplication system.
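The content-based idea can be sketched in a few lines of Python: normalize each reference’s title and author fields, then compare the normalized strings with a similarity ratio instead of an exact match. The `normalize` and `is_duplicate` helpers and the 0.9 threshold below are illustrative assumptions, not DistillerSR’s matching logic.

```python
import difflib
import re

def normalize(ref):
    """Lowercase, strip punctuation, collapse whitespace."""
    text = f"{ref['title']} {ref['authors']}".lower()
    text = re.sub(r"[^a-z0-9 ]", " ", text)
    return re.sub(r" +", " ", text).strip()

def is_duplicate(ref_a, ref_b, threshold=0.9):
    """Flag references whose normalized content is nearly identical,
    even when the raw formatting differs."""
    ratio = difflib.SequenceMatcher(None, normalize(ref_a), normalize(ref_b)).ratio()
    return ratio >= threshold

def deduplicate(refs, threshold=0.9):
    """Keep the first occurrence of each distinct reference."""
    kept = []
    for ref in refs:
        if not any(is_duplicate(ref, known, threshold) for known in kept):
            kept.append(ref)
    return kept
```

Two records with different author formatting (e.g. `"Smith J, Lee K"` versus `"Smith, J.; Lee, K."`) survive an exact-match check as two papers but collapse to one here, which is the point being made about content versus structure.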
Failing an audit can be costly for pharmaceutical and medical device manufacturers. How do automation and AI address these challenges?
O’Blenis: An auditor needs to know what was reviewed, who reviewed it, when they reviewed it and whether the people who reviewed it were qualified to do the review. So you need to be able to identify the provenance for every single cell of data that you submit in a table or a spreadsheet—where did it come from, and who put it there?
Obviously, if you’re using a traditional spreadsheet method, tracking the origin of that data to that level of granularity is extremely difficult and is almost never done well. With an AI-based platform, it’s very straightforward. It tracks everything that happens right down to the click, so we can tell you who submitted that data, how long they spent reading that paper before submitting the answer and so on. This can really accelerate the audit because, if the auditor has a question, they can just look at the system and pull up that answer immediately.
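That kind of cell-level provenance amounts to an append-only log. The toy `AuditTrail` below is a hypothetical sketch, not DistillerSR’s data model: every answer is recorded with the reviewer, a timestamp and the time spent, and the full history of any cell can be replayed for an auditor on demand.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AuditEntry:
    cell_id: str          # which data cell, e.g. one form field on one paper
    reviewer: str
    answer: str
    recorded_at: datetime
    seconds_open: float   # how long the reviewer had the paper open

class AuditTrail:
    """Append-only provenance log (illustrative field names)."""

    def __init__(self):
        self.entries = []

    def record(self, cell_id, reviewer, answer, seconds_open):
        self.entries.append(
            AuditEntry(cell_id, reviewer, answer,
                       datetime.now(timezone.utc), seconds_open))

    def provenance(self, cell_id):
        """Full history for one cell, oldest entry first."""
        return [e for e in self.entries if e.cell_id == cell_id]
```

Because entries are only ever appended, the log answers the auditor’s questions directly: what was reviewed, by whom, when, and how carefully.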
The other component is finding errors. We have had customers tell us that when they were using spreadsheets for their reviews, it could take them three days to find an error and correct it. Using an automated platform decreases that time from days to minutes.
How can these advancements help manufacturers achieve or maintain compliance with global regulations surrounding post-market surveillance?
O’Blenis: Literature reviews are also used in post-market surveillance to identify adverse events, as well as in the preclinical phase. In advance of a clinical trial, best practice would dictate conducting a literature review first to discover what the state of the art is. This allows you to keep the trial as small and efficient as possible by not repeating work that has already been done, and that’s also much safer from a patient perspective. You’re putting fewer people at risk through fewer interventions.
Interestingly, we are seeing more emphasis on using post-market surveillance to get medtech products to market faster, as long as there’s a good safety protocol in place. Literature monitoring for the purposes of post-market surveillance is essentially a living review. It’s a review that is updated constantly. With an AI-based platform, you start with your literature review and then you develop questions around safety so that when papers come in, they get assessed based on potential safety issues and characteristics—they become a living review. You can set up automated searches and automated search retrieval so that as new papers are published, they can be fed into the system automatically and they can be deduplicated automatically. Then you just continuously update that review so you have an accurate assessment of the literature pertaining to safety at all times.
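One refresh cycle of such a living review can be sketched as a small loop. The helpers below are hypothetical stand-ins: `fetch_new` for an automated search feed, `is_duplicate` for the deduplication step, and `assess_safety` for the safety-question assessment; in a real platform each would be an automated retrieval job or an AI model rather than the trivial stubs shown.

```python
def update_living_review(library, fetch_new, is_duplicate, assess_safety):
    """One refresh cycle: pull newly published papers, drop duplicates,
    add the rest to the review, and flag potential safety signals."""
    flagged = []
    for ref in fetch_new():
        if any(is_duplicate(ref, known) for known in library):
            continue  # already in the review
        library.append(ref)
        if assess_safety(ref):
            flagged.append(ref)
    return flagged

# Trivial stubs standing in for real feeds and models (illustrative only)
def fetch_new():
    return ["Stent fracture case series",
            "Long-term stent outcomes",
            "Stent fracture case series"]

def is_duplicate(a, b):
    return a == b

def assess_safety(ref):
    return "fracture" in ref.lower()

library = ["Long-term stent outcomes"]
flagged = update_living_review(library, fetch_new, is_duplicate, assess_safety)
```

Running the cycle on a schedule is what keeps the review "living": each pass folds in only genuinely new papers and surfaces the ones that touch on safety.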
Evidence Partners has been in the news recently. I understand you secured growth financing. How will this help you to advance your company’s objectives?
O’Blenis: Until this year, Evidence Partners has been funded entirely from revenue, and we’ve been growing very quickly over the past decade. The opportunity came up to accelerate that growth, double down on innovation and product development, and better serve our customers. This funding will also allow us to invest in our go-to-market strategy, which is to get our product in front of more people.
The product, DistillerSR, is unique in its space, and it serves an important role in the regulatory process, so we want to make sure we can get it out there and make it available to as many folks as possible. This funding will help us accelerate that process.
You also are preparing for your second annual summit, Evidence Matters 2022. What are your goals for this event?
O’Blenis: Evidence Matters is DistillerSR’s annual virtual summit for the literature review community. It is a one-day virtual event with multiple tracks so people can sign up for the tracks they are interested in. We started this last year, and it was a very successful initiative. We had about 400 people attending. We make it free to the community and we bring in industry experts to discuss a very broad spectrum of issues pertaining to evidence collection, literature reviews and all of the ancillary aspects of that.
It’s happening on September 27, and we’re really excited this year because we’ve got some amazing speakers. We have Dr. Devi Sridhar, chair and professor of public health at the University of Edinburgh, and Dr. Jessica Shen, vice president of the global medical sector at Johnson & Johnson.
In terms of industry representation, we have Sanofi, Integra LifeSciences and Roche. On the public health side, we have the Public Health Agency, USDA and the California Department of Pesticide Regulation as well. We have a really interesting and diverse group of speakers looking at a variety of use cases.
I think this is going to be a very interesting space over the next couple of years as we see the blending of post-market surveillance and preclinical all under one umbrella. The challenges of literature reviews are not going away, so we need to continue to improve, and that is certainly what we’re focused on doing.
You can learn more about Evidence Matters 2022 and register here.
Original post: https://www.medtechintelligence.com/feature_article/streamlining-the-literature-review-process/