
Safety and Quality Implications of Using Artificial Intelligence in Oncology

Featuring Stephen V. Speicher, MD, MS

In an interview with the Journal of Clinical Pathways, Stephen V. Speicher, MD, MS, senior medical director and head of Healthcare Quality and Patient Safety at Flatiron Health, previews his upcoming panel session for the Association of Cancer Care Centers 50th Annual Meeting & Cancer Center Business Summit titled “Deep Dive: Artificial and Business Intelligence Technology.”


Transcript:

Stephen V. Speicher, MD, MS: My name is Stephen Speicher. I am a senior medical director and head of Healthcare Quality and Patient Safety at Flatiron Health. I'm a pediatric hematologist and oncologist by training.

Could you please give us a brief summary of the topics you'll be discussing during the “Deep Dive: Artificial and Business Intelligence Technology” session at AMCCBS?

Dr. Speicher: So of course, like I said, I'm a pediatric hematologist-oncologist. I work at Flatiron Health and I oversee health care quality and patient safety for our health care business unit, overseeing the quality and safety of all of our point-of-care solutions that are used by oncologists in their routine care of oncology patients. We're going to be diving into a lot of different topics during this deep dive. I'm spending a good amount of time talking specifically about the safety and quality implications of artificial intelligence in some of the tools that are being put forward for use by oncologists, really thinking about how we ensure that these tools, which are rapidly evolving and rapidly being developed, are being used for the safe, high-quality care of cancer patients. There are a lot of growing concerns about artificial intelligence across a variety of industries, and health care is definitely one of those. So we want to take the opportunity to really dive into artificial intelligence in health care technology specifically for oncologists, and what some of those safety and quality implications are.

How has AI supported physicians in their decision-making processes within the field of oncology?

Dr. Speicher: What I would say to start: artificial intelligence in oncology is still fairly nascent. It's still a fairly nascent field more broadly, and when we think about AI in health care, it's definitely still in the very early stages of development. The use cases we're seeing a lot of in oncology specifically are related to operational and workflow efficiencies, some of the more back-end, background use cases for oncologists and oncology practices. I think that's being done deliberately because there is so much concern about quality and safety related to these tools. We want to make sure that we're starting with what I would consider low-hanging fruit, use cases that are a little less clinically significant and carry less risk overall. So we're seeing a variety of different things related to those operational and workflow efficiencies.

Then we're starting to see some more clinical use cases, looking at how we can really optimize clinician and physician workflow using artificial intelligence in day-to-day practice. One of the things that I'm really excited about, just as an oncologist, as someone who still practices, is the idea of ambient voice recording in the pursuit of better documentation. What we're seeing is that you can be a clinician seeing patients with an ambient voice recorder listening to everything that's being discussed, and this really incredible, rapidly evolving technology can then produce a pretty amazing visit note, capture charges, and do all the things that you would otherwise have to do either during your actual encounter or after it. Things like this are incredible for physician satisfaction and for addressing burnout, and they also make the physician's workflow much more efficient and much less time consuming, saving the physician a lot of time overall.

How can oncology programs establish a robust safety framework around AI technologies, especially considering inherent biases and the potential exacerbation of health care inequities?

Dr. Speicher: I think it's really important for oncology programs, for hospitals, health systems, and oncology clinics, to realize that they are crucial to understanding the safety and quality implications of these tools. First and foremost, they need to recognize that they are really the first line of defense in understanding and vetting these tools, in finding out what is going to be beneficial for their practice and for the patient. That recognition, I think, is the first priority. We don't expect, or shouldn't expect, oncology programs to be experts or to have an expert in artificial intelligence on staff; I think that's unrealistic. What we can expect is that they familiarize themselves to a certain extent with the technology, with the products that are being used, with the limitations, and with some of the concerns.

I think that's really important as these tools are developed. There's no doubt in anybody's mind at this point that artificial intelligence is the future. It's rapidly evolving and taking over in a lot of ways, a lot of really exciting ways, but I think we have to be realistic about the fact that we need a little baseline knowledge to be able to ask the right questions. Again, I don't anticipate any oncology program having someone on staff who's going to be able to dive into these algorithms, these ML models, and really know the ins and outs of them. But you have to understand the technology enough to ask the right questions, and ultimately, having someone, or designated individuals, who understand the technology well enough to ask those questions is really crucial to a safety program for oncology practices.

What factors should oncology professionals consider when researching or meeting with potential vendor partners for AI solutions?

Dr. Speicher: What I've seen, and what I would expect from a lot of these vendors, is that they're going to come in and try to wow you. That is their job. They are trying to sell you on their specific product, on their specific solution, and I would let them do that. You want to see what the product is actually able to do, but I would very quickly try to pivot the conversation away from what they're used to doing in that kind of product demo, and really dig in to ask some tough questions. Really pressure test the technology. So if they give you the opportunity to actually demo it, throw curveballs at it. Do not set up the most straightforward example; find the most complicated example and see how the artificial intelligence handles it.

I would say that really pressure testing the technology is going to be one of the best ways you can start to discern some of the potential pitfalls and limitations of this technology. To get more granular, one of the questions I would always want to ask is about the training data. Where is the data coming from that is training this tool? When we think about the foundational importance of data for artificial intelligence, I would want to know where that data is coming from. I would want to know what the data quality is. I'd want to know who is ensuring the quality of the data feeding this model, because that is ultimately what contributes to the output of the tool. So that would be a question I would definitely dive into.

I'd also want to ask the question directly. Say, “Hey, how are you evaluating quality and safety in your tool? How are you doing quality assurance on this specific tool?” And I'd ask questions about their thoughts on the regulatory framework. One thing we know about artificial intelligence in health care is that the regulatory framework is not nearly what it is for drug development or device development. We're working on it, right? You see a lot of different government institutions starting to establish frameworks around regulating artificial intelligence in health care, but we're not there yet. So I would want to know their thoughts on the regulatory framework: how are you thinking about regulations in this space? I'd also want to know what their quality and safety team looks like. I think any established health care technology company should have a team that's really thinking about quality and safety in their tools.

I'd want to know from those vendors, what are you specifically doing around quality and safety in this space? And then I'd want to know how you're encouraging the safe use of these tools by clinicians. A hugely important part of how safe a tool is, is how it is being used by clinicians, and how the companies best support the safest use of the tools going forward. So those are just a few examples. Like I said, I think the best thing oncology programs, practices, and health systems can do right now is to familiarize themselves and really lean into it a little bit. I know it can be a little intimidating, even a little scary, to say, gosh, I don't even know where to start here. I would say just familiarize yourself with some of the terminology, so that when you are meeting with these vendors, and you will be meeting with these vendors, you feel a lot more comfortable in those conversations, and it's not that you don't know the first thing about artificial intelligence.

Are there specific challenges or opportunities you anticipate in the future integration of AI in oncology, and how should programs prepare for them?

Dr. Speicher: I'm very curious how artificial intelligence is going to work in oncology. I can see so many opportunities in primary care and in what we would consider less complex medical specialties. Anybody who practices oncology, or is in and around oncology, can attest to how complex and complicated a subspecialty it is, and it's only getting more complex. So on the one hand, there are so many opportunities in oncology for artificial intelligence to assist with things like clinical decision support and keeping providers up to date with the most relevant information. On the other hand, there are a ton of questions about how you do that. As we continue to develop in the genomic space, how do we make sure, again, that the data infrastructure is in place so that we have really strong tools? We need those strong ML tools.

We don't just need artificial intelligence; we need really good artificial intelligence to help with oncology care. As for the ultimate goal, where I see this going: a lot of people have fears about artificial intelligence replacing the doctor, and I don't ever see that happening. I would love to see artificial intelligence get to a place where it's really just optimizing efficiency for the doctors, for the nurses, for the medical assistants, for the front desk. As we use these tools to optimize those workflows, we can get back to doing what it is we are trying to do, which is to critically think about things. I don't want artificial intelligence diagnosing my cancer or telling me exactly what cancer treatment I need, but I would like artificial intelligence to really enhance the experience of clinicians so that they're making the best decisions.

They're making them in a timely manner, they're efficient in doing so, and they can get back to really taking care of patients. That's where I would love to see artificial intelligence grow within the oncology space. Keeping those things in mind, I think what we're going to see in the next several years is that we're going to really narrow down the use cases that are most relevant for oncology. Practices and oncology programs are going to start getting inundated, if they're not already, by vendors trying to sell all these different use cases. We're going to see which ones are the most crucial and really try to weed through them. Where can we use this incredible technology to optimize our workflows, and where is it just not useful? Where do we really not feel comfortable with artificial intelligence being involved, and where do we feel like it is really going to solve some important problems?

© 2024 HMP Global. All Rights Reserved.
Any views and opinions expressed are those of the author(s) and/or participants and do not necessarily reflect the views, policy, or position of Journal of Clinical Pathways or HMP Global, their employees, and affiliates. 
