Editor’s 2024 Top 10: Coronary Physiology Instantaneous Wave-Free Ratio (iFR) Derived From X-Ray Angiography Using Artificial Intelligence Deep Learning Models: A Pilot Study
Dr Miguel Nobre Menezes shares background and insights on his Editor’s 2024 Top 10 original article, "Coronary Physiology Instantaneous Wave-Free Ratio (iFR) Derived From X-Ray Angiography Using Artificial Intelligence Deep Learning Models: A Pilot Study".
Transcript:
Hello, my name is Miguel Nobre Menezes. I'm an interventional cardiologist based in Lisbon, Portugal. I work at Hospital Santa Maria, which is one of the largest university hospitals in the country, and I'm also a professor at the Faculty of Medicine, University of Lisbon, and a clinical researcher at the Cardiovascular Center of the University of Lisbon.
00:30: First of all, congratulations on your article being selected as a Top 10 Editor’s Choice for 2024! Can you please start by telling us what led you to develop an AI model for the estimation of coronary physiology in particular?
Thank you very much. It's an honor, actually, to be in the top 10 of JIC’s articles this year. So, basically, I've been a user of physiology for quite some time; but, as we all know, the use of physiology in everyday clinical practice remains much lower than would be desirable, very often below 10% of all PCIs. And this has been the case in most places, including Portugal. Furthermore, there's another issue, which is the fact that physiology is an additional invasive step when you're dealing with coronary artery disease, because you have to put in a guide catheter and a wire. There's also a layer of cost to it. And so, I thought it would be ideal if we could get virtual physiology without actually having to put in a pressure wire. Now, there's software that does this already, but most of that software is not AI-based at all. And furthermore, none of that software is aimed at iFR, just FFR. And, of course, we all know that FFR is the gold standard, but iFR also has some advantages, especially in tandem lesions.
And so, taking into consideration how limited the implementation of physiology is in most PCIs, the cost, the risks, and the limitations of currently available software, we felt we should explore this as a research path: applying AI to see whether or not we could get iFR data from images alone. This was, of course, part of a broader effort we've been developing over the past few years of applying AI to coronary angiography generally, because we still base much of our decisions on grayscale images, and that really hasn't changed much since the 1950s apart from improvements in image quality. And so, it's part of a larger effort overall.
02:37: What were the most significant challenges you faced during the study?
There were quite a number of challenges, actually. The first thing we had to do, before getting into the physiology part, was teach the models to recognize the coronary tree. In other words, we had to do segmentation. Now, when we first started this project a few years ago, there weren't many models available, and none capable of the kind of segmentation we could apply.
And so, we started trying out ways to segment the coronary tree. We began quite a few years ago with object detection, which gives you those bounding boxes you can see around the lesions, but we quickly realized it just wouldn't be enough to proceed with the ultimate goal of getting physiological data from the actual images. And so, we moved on to semantic segmentation, which means the coronary tree is precisely extracted from the image and everything else is background. The first thing we had to do, then, was develop segmentation models, and that alone took a very large amount of time, because you have to annotate images, and to annotate images you need people who are acquainted with coronary angiography; ideally, physicians well versed in the interpretation of coronary angiography. That was the first step, and it was a major hurdle, because we had to manually annotate hundreds of images to get good segmentation models, and then we validated them with data from other centers and published all of that. So that step in itself was quite long.
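For readers less familiar with the distinction, the sketch below shows what semantic-segmentation inference looks like in practice. It is purely illustrative, not the group's published code: the untrained U-Net, the single grayscale frame, and the 0.5 threshold are all stand-in assumptions.

```python
# Minimal semantic-segmentation inference sketch (illustrative only).
import torch
import segmentation_models_pytorch as smp

# Untrained U-Net with a ResNet-34 encoder as a placeholder; the authors
# trained their own models on manually annotated angiograms.
model = smp.Unet(encoder_name="resnet34", encoder_weights=None,
                 in_channels=1, classes=1)
model.eval()

frame = torch.rand(1, 1, 512, 512)  # stand-in for one grayscale angiogram frame

with torch.no_grad():
    logits = model(frame)                          # per-pixel logits
    mask = (torch.sigmoid(logits) > 0.5).float()   # 1 = coronary tree, 0 = background

print(f"Vessel pixels: {int(mask.sum())} of {mask.numel()}")
```

Unlike an object detector's coarse bounding boxes, the output here is a pixel-level mask of the vessel tree, which is what downstream physiology estimation needs.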
With regards to the physiological data itself, I suppose the single biggest hurdle we faced was building the AI models, because we tested a variety of architectures and none of them seemed to produce particularly good results. Eventually we got to 3 architectures that produced results good enough for publication, because they had a very high negative predictive value. And so, there was proof of concept that it is actually possible to extract iFR data from the angiographic image itself. But it took us a very long time and a lot of training to get there.
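For context, negative predictive value (NPV) is the fraction of lesions the model calls negative that truly are negative, so a high NPV means a negative model call could safely defer a pressure wire. A toy calculation with invented counts (not figures from the study) shows the arithmetic:

```python
# Toy confusion-matrix counts (invented for illustration). A lesion is
# "positive" when invasive iFR <= 0.89, the usual significance cutoff.
tn, fn = 90, 3      # true negatives, false negatives
tp, fp = 40, 17     # true positives, false positives

npv = tn / (tn + fn)            # P(truly negative | model says negative)
sensitivity = tp / (tp + fn)    # P(model says positive | truly positive)
print(f"NPV = {npv:.2f}, sensitivity = {sensitivity:.2f}")  # NPV = 0.97 here
```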
I would also say that one of the most important hurdles, even at this stage of physiology derivation, was again annotation, because you need a large data set, and we had a reasonably large data set for an exploratory analysis. Then, once again, you have to manually annotate the images: you have to review the videos, select the ideal frame, and pinpoint where the measurement was taken. And all of this comes after curating the data, because if you're going to train AI models to handle physiological data, you have to rule out any case of poor image quality or cases where the measurement was made off-label, such as in patients with CTOs. So there was a lot of effort in doing that.
So, I would say there were 2 major hurdles: first, curating and annotating data is exceedingly cumbersome; and second, the actual architecture of AI models requires a lot of trial and error. That was clearly a major hurdle, and it still is, because our models are not yet mature enough for clinical deployment.
06:16: Has there been any progression in the models since the study's publication?
So, there's been a bit of progress. We've managed to improve the accuracy a little, but not that much, actually, and so it's still not mature enough for clinical deployment. We have, however, made comparisons with operator performance, and an article on that is currently in press. We've already shown that these models are actually slightly superior to an operator in predicting whether or not a lesion is physiologically significant. This is quite interesting, because our models have only been trained on 150 lesions, and all of the operators we compared them with have seen far more than that throughout their careers; they're operators in their forties, so they've had quite a bit of exposure. But that's the main progression we've had.
We've also introduced a temporal dimension, because in the original paper we only used the single best frame. We've since extended the models so they actually analyze all the frames. But we didn't get much of an improvement, which is actually in line with other, non-AI software, which today does not necessarily require that temporal information. And so, like I said, there's been some progression, but it's still not mature enough. We're still fighting it.
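As a rough illustration of what adding a temporal dimension can mean, the sketch below encodes every frame of a run with a tiny per-frame CNN and mean-pools the features over time before classifying. The architecture, input sizes, and pooling choice are assumptions made for illustration, not the published models.

```python
# Illustrative temporal extension of a single-frame classifier (assumptions,
# not the published architecture).
import torch
import torch.nn as nn

class TemporalIFRNet(nn.Module):
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(              # tiny per-frame CNN encoder
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(feat_dim, 1)         # logit for P(iFR <= 0.89)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = clip.shape                 # batch, frames, channels, H, W
        feats = self.encoder(clip.view(b * t, c, h, w)).view(b, t, -1)
        return torch.sigmoid(self.head(feats.mean(dim=1)))  # pool over frames

clip = torch.rand(2, 30, 1, 128, 128)  # stand-in: 2 runs of 30 frames each
print(TemporalIFRNet()(clip).shape)    # torch.Size([2, 1])
```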
It's a pretty difficult problem, actually, because if you think about it, iFR is proprietary to Philips, and there's been a study, the ReVEAL iFR study, where they're trying to do this exact same thing. But to the best of my knowledge, they have yet to publish the results. So, if the company and group that own the index have not been able to virtualize it yet, that just shows you how difficult a problem this actually is.
08:07: Is your team developing or hoping to develop any AI models for other applications in the field?
Like I said earlier, we've actually developed AI models for other applications. We've got AI models for segmentation, if you look up our other publications. One of our engineers has placed the necessary code on his GitHub page, so you can actually test out the models on a data set if you want to. We believe those are quite useful, because anybody can use them to segment images and eventually train on their own data sets. And we're currently working first and foremost on trying to improve our AI models.
But we are also working with LLMs, which, as you all know, are systems like ChatGPT, Gemini, or Claude from Anthropic. We're working with large language models to process interventional data that may be integrated into clinical pathways, as well as to aid in clinical decisions. So, we're currently training LLMs and verifying existing LLMs’ ability to process clinical data as presented in electronic health records. Now, there are models that already do this, but the reason we're testing it quite intensively is that LLMs can be language sensitive. While they can handle multiple languages, they are less well trained in less commonly used languages like Portuguese (they're a lot more proficient in English), and especially European Portuguese, because we write slightly differently from Brazil, for example. So, we're currently working with LLMs to process EHR data in medical records written in European Portuguese. That's another field we're exploring.
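As a generic illustration of this kind of task (not the group's actual pipeline), the sketch below asks an LLM to extract structured fields from a short, invented note written in European Portuguese, using the OpenAI Python client as a stand-in for any LLM API; the model name and prompt are assumptions.

```python
# Hypothetical example: structured extraction from a Portuguese clinical note.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Invented note: "67-year-old patient admitted for non-ST-elevation acute
# myocardial infarction. Angiography: 80% lesion in the proximal LAD."
note = (
    "Doente de 67 anos, internado por enfarte agudo do miocárdio sem "
    "supradesnivelamento de ST. Coronariografia: lesão de 80% na DA proximal."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Extract the diagnosis, vessel, and stenosis severity "
                    "from this Portuguese clinical note. Reply as JSON."},
        {"role": "user", "content": note},
    ],
)
print(response.choices[0].message.content)
```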
10:08: In the next few years, where do you see AI making the biggest impact on interventional cardiology?
So, there's where I see it, and where I'd like to see it. I'd like to see it permeate pretty much every aspect of interventional cardiology. Thinking about being in a cath lab, and this is the big idea behind the whole larger research project, I would very much like to see us having AI-based digital mapping of what we're actually looking at. We have to move on from the 20th-century days of grayscale images and the visual appreciation of lesions. Like our colleagues in electrophysiology, who already have very good mapping systems, some of which are AI-based, I would very much like to see that in the cath lab for structural and coronary procedures, but particularly for coronary procedures. And I would like to see it applied not only to coronary angiography, but also to the other things we do. We've already got some of that with Abbott's OCT software, which has AI features enabling automatic measurements on OCT images. I would like to see the same thing for IVUS and the same thing for QCA; there's already software that does that for physiology. And then, generally, a mapping system that would pinpoint the most important places to look at, which would reduce operator heterogeneity as well as our tendency to overestimate the significance of lesions.
I would also like to see it in other fields, in particular, like I said, the handling of EHRs, because electronic health records are very heterogeneous: they're written in a number of ways, across a number of software platforms. I would like to see AI processing EHR data much like a human would, but much faster, and at the level of a medical expert. And I would also like to see it in research, for example automatically identifying patients who are eligible for clinical trials, or predicting events with cluster analysis or other types of AI-based analysis. So I actually think AI will permeate every single step of medicine, albeit probably at a slower rate than we see with general applications, because big data in medicine is hard to come by due to medical privilege, information and software limitations, and, of course, regulatory reasons. But I think that by the end of this decade we will start to see AI applications being used every day in the cath lab.
12:55: For cardiologists who are interested in incorporating AI into their process, could you share any best practices based on your experiences?
So, if you want to incorporate AI into your practice, I would say this: if you're thinking about research, bear in mind that image annotation is a cumbersome process and requires a lot of quality control. So don't get 20 or 30 people annotating images, because chances are you will end up with great variability in the quality of the annotations, and then you will be training the models on an inconsistent pool. A smaller team of very motivated, hardworking people works better. The same applies, for example, to data analysis: you must have a solid ground truth. So that's the first thing I would say: be careful with the quality of your ground truth. Otherwise, you will be biasing your model and training it on bad data, which is worse than having no data at all.
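One simple safeguard along these lines, offered as an illustration rather than the authors' protocol: score inter-annotator agreement on the same frame with a Dice coefficient and flag low-agreement frames before they enter the training set. The masks and the 0.8 threshold below are made up for the example.

```python
# Flag frames where two annotators' binary masks disagree too much.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-8)

rng = np.random.default_rng(0)
mask_a = rng.random((512, 512)) > 0.9      # stand-in for annotator A's mask
mask_b = mask_a.copy()
mask_b[:40] = rng.random((40, 512)) > 0.9  # simulate disagreement from annotator B

score = dice(mask_a, mask_b)
status = "flag for review" if score < 0.8 else "accept"  # hypothetical threshold
print(f"Dice = {score:.2f} -> {status}")
```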
In addition to that, I would say you do need to partner with engineers. There's no way we can do this ourselves, even if you become somewhat proficient with coding. I do Python coding myself and can run models myself, but we need engineers. You cannot do this on your own.
And the last thing I would say is that, while in research we generally tend to stick with academic endeavors, partnering with companies and capital is essential for scaling things up, because at some point this becomes quite expensive and requires a lot of compute power. So, generally speaking, partnering with companies can be a good idea as well.
Other than that, you should just try to educate yourself about AI as much as possible. And you can use AI tools to educate yourself about AI: if you use Claude from Anthropic, Google Gemini, or even ChatGPT, you can get educated about AI. But I think what will actually happen is that AI will come to you. If you haven't dealt with AI already, it will naturally come to you, just like computers and the Internet have. So, I would say that while you should keep an eye on the field of AI, there's no need to be particularly obsessed with it, because it will eventually come to you. Most AI applications we have right now are not particularly user friendly; if you compare how apps work on your phone vs even ChatGPT or Gemini, it's a lot less friendly. But it will get better, and eventually AI will come to you anyway.
15:45: Thank you so much for your time today. Is there anything else you'd like to share with our audience?
I'd just like to say thank you once more for the honor of including us among the top 10 articles of the Journal of Invasive Cardiology. We're very pleased about that, and we hope you enjoy our articles and keep on reading this great journal.
© 2025 HMP Global. All Rights Reserved.
Any views and opinions expressed are those of the author(s) and/or participants and do not necessarily reflect the views, policy, or position of the Journal of Invasive Cardiology or HMP Global, their employees, and affiliates.