Western Atrial Fibrillation Symposium 2025: Session 9 Roundtable, Part 2
Bridging AI Research and Real-World Clinical Practice for Atrial Fibrillation Management: Practical Examples and Lessons Learned
© 2025 HMP Global. All Rights Reserved.
Any views and opinions expressed are those of the author(s) and/or participants and do not necessarily reflect the views, policy, or position of EP Lab Digest or HMP Global, their employees, and affiliates.
Edited by Jodie Elrod
Featured is the Session 9 Roundtable entitled "Bridging AI Research and Real-World Clinical Practice for Atrial Fibrillation Management: Practical Examples and Lessons Learned" from WAFib 2025.
Transcripts
Nicholas Peters, MD, FRCP, MBBS, FHRS: I’d like to first frame this roundtable, and the panel can then introduce themselves as they speak. There is either a willful ignorance or just a blind spot in thinking that something that has proven to be very effective and efficacious, an innovation for which there are randomized data and indisputable effectiveness, will somehow find its way into clinical practice and impact our patients at scale. There is a “valley of death” there that holds a lot of technologies, and we all know of examples that really should have impacted patient care but didn't. It wasn't because they weren't good, effective, or efficacious, but the process of taking them into real-world care was just not managed, and it needs to be managed—that is an active process. Implementation science is a science, and we must address that valley of death. So, that is what this session is specifically all about. I've asked each of the panelists to give a single practical example to start with, and then we'll let the discussion take on a life of its own. Dr Makati, would you lead off please?
Kevin Makati, MD: Thank you, and thank you, Nassir, for inviting me to talk. I think this is a very relevant discussion, especially with all the artificial intelligence (AI) evolving and emerging in atrial fibrillation (AF) management. An example of how we can best leverage AI in AF management is not so much the AI that we use to understand the mechanism of AF, but the AI that identifies patients and improves the yield of matching them to the procedures that we perform every day, such as pulmonary vein isolation (PVI). How do we identify the ideal patient who is going to maximally benefit from PVI or posterior wall isolation? To your point, there is a lot of technology out there that doesn't see the light of day, perhaps because we have not identified the right patient population. We can even extend this to areas in EP outside of AF, such as conduction system pacing or biventricular pacing. How do we use AI to identify the right responders when we as humans don't have the bandwidth to integrate the vast amount of data that we're collecting, especially from implantable cardiac devices and clinical demographics, and make reasonable decisions when it comes to choosing procedures?
Tina Baykaner, MD: Thank you, Nick. I think this is a great discussion: what is actually the concrete evidence that AI has truly affected our patient care? I think what hasn't been brought up so far today and yesterday as much is the role of AI in the remote monitoring world. If you think of the “old days” of our remote monitoring reports from implantable loop recorders, the false-positives for AF were immense. One of the algorithms I'm quite familiar with is Medtronic’s, and I know the other companies now have similar algorithms, with an AI filter that interprets those signals and flags the false AF episodes; that reduces by 80% the burden of what we're looking at as AF alerts in our screens and day-to-day clinical practice in patients with a loop recorder. This has been implemented by third-party monitors such as Implicity and Pacemate in terms of managing which alerts are more actionable. So, I think that has truly changed our practice in real life.
Nishaki Mehta, MD: Thank you, Nassir, for having me here this year again. I gave your questions some thought, and I was actually hoping to take it one step upstream. An area of investigation close to my heart is arrhythmias in pregnancy, and I think AI has a very good use case for real-world utilization in point-of-care AI ECG diagnosis of AF in this population. The numbers are there: cardiovascular disease occurs in 1% to 4% of pregnancies, and arrhythmias account for about one-third of those cases. So, we have a high number to work with. To my knowledge, the only current trial in this space is the SPEC-AI trial in Nigeria, which showed a very significant impact of AI ECG diagnosis in this population, but that was in the peripartum cardiomyopathy space. So, I think AF is an untapped area to explore. In terms of the second half, bridging it into real life, the implementation science part of it, I have been personally using the Cardiovascular Data Science (CarDS) Lab platform through Yale, headed by Rohan Khera, where you can upload an image of the ECG and find out the predictive value for development of cardiomyopathy. To my knowledge, and I would ask all of you, I'm not aware of an open-source platform where I can upload an ECG and predict AF. But I think combining the use of AI in a very high-risk, low-frequency population, which is first encountered by a specialty other than cardiology or EP, would be of great value.
Chan Ho Lim: Thanks for having me here, Dr Marrouche. As a machine learning engineer, when I'm having discussions with physicians about AI, I often divide it into 2 different categories: one being automation, and the other being AI used for discovery. For automation, we already see a lot of examples out there. All the ECGs being read already carry machine labels, and in a lot of imaging, there is already AI built into the PACS servers in our practice. When it comes to discovery, it's a completely different story. We may have to bring changes to patient care, and in terms of practicality, it's even more difficult to discuss. But one of the ideas we've been working on in our lab is the personalization of AF care, not only the personalization of the treatment itself, but also the personalization of the follow-up and the screening process using our AI models. So, those are some practical examples of how I view AI in medicine today.
Hamid Ghanbari, MD, MPH: That was terrific. As I was listening to you, Nick, and to all the great examples, there is a framework that I always use when I speak with our group, and what you highlighted here is the difference between an invention and innovation. It's not just that you come up with some new algorithm or product, you must innovate. Innovation is not just something new, it has to create value, and you have to capture some of that value. That value capture could be financial, or it could be in systems optimization, but you must be able to capture it. I think coming up with technology is really interesting and important. Implementation of it is important, but also, there is a lot more that goes into an innovation. You have to be able to monitor it and see if it's effective over time. You have to create a system where you can capture some of the value and put it back in a way that you can sustain it long term. So, there are a lot of challenges when it comes to AI. I want to take that and start with you, Nick. Can you speak to some of your experience, and to the panel as well: what are some of the major barriers that you see when you're trying to put an AI algorithm into use?
Tina Baykaner, MD: I can start with that. Yes, I think Sanjiv (Narayan) brought up an important point in the last session. The algorithms are only as good as the labels. If you have a perfect “This is the Beatles” label versus a “No, this is definitely not the Beatles” label, the algorithm will learn, but we don't have the best labels all the time. We don't have the perfect PVI to say this patient did not respond to PVI, because we don't remap those patients 3 months later to prove that the PVs were isolated and then conclude that they had recurrence. Also, our definition of recurrence has been ever-changing—is a 30-second presence or absence of AF my label, or is it a 99% reduction in burden? Is that the label that I'm trying to learn? So, it's quite hard. It's a challenge in our world that we don't have the best labels to teach an algorithm what to learn. I think it will be an ongoing challenge down the line.
Nicholas Peters, MD, FRCP, MBBS, FHRS: Dr Mehta, were you going to say something?
Nishaki Mehta, MD: I think, practically, for physicians who are not AI scientists, it's ignorance. We don't even know when we are using AI—forget using it proactively. Digestive Disease Week (DDW), which is the largest gastrointestinal (GI) conference in the United States, devotes a half-day workshop to helping GI physicians understand AI. So, I think it starts with education.
Nicholas Peters, MD, FRCP, MBBS, FHRS: This is making some beautiful points. This free-flowing discussion has already hit on what I would consider some of the golden rules of implementation science and impact. I run the Health Impact Lab at Imperial Virtual Hospital in London, and I am responsible for the biggest AI deployment in NHS care as we speak, servicing a population of about 4 million patients. It's been very tough. Look, implementation science is a social science. We've moved from some of the really hard science that we've heard about to what ultimately is a soft science. You've heard of soft power, which is particularly pertinent at the moment, and this is soft science, but it's a very important science. The human element is really fundamental. Innovation is not innovation if it has no impact; there has to be some value, otherwise it's just invention. It's on the wrong side of the valley of death unless it has become innovation and created impact. Of course, piloting something is a sure way to kill it, because piloting generally requires extraordinary discretionary effort and cost, which will not naturally segue into clinical delivery and business as usual. To Dr Mehta's point, ignorance creates fear and the fear of change. Actually, if you talk about change and radical paradigm shifts, that's great for investors. They like to hear that. But let's face it, the social science is focused more on our colleagues than it is on patients. Our colleagues fear change, so everything has to be presented as the same but better, the same but better, the same but better. Then, you take everyone on a journey to change without them even noticing.
Tina made a really important point about solving demand. It's no good creating efficiency if you don't then address the demand. I do not know if you're familiar with the Jevons paradox, but going back to Britain in the industrial revolution, when steam engines were made more efficient, demand for coal increased rather than decreased. Jevons, an economist of the time, coined a paradox that remains very true today: if you improve efficiency, you increase demand, you don't reduce it. So, Tina alluded to an efficiency that could increase demand through the data overload that AI will generate. If we don't solve the demand that's created, we will have a problem. It's no good creating efficiency without addressing the demand that will result from it. That was a really important point.
So, these are some of the golden rules. Any comments? I will say that in the field of AF ablation, we've had a very easy playing field, because with respect to all those golden rules, the ability to come up with ideas about how to modify AF ablation and try them out easily within our procedural confines has been very much part of our evolution. Actually, we have not had a lot of barriers to entry for people trying to ablate fractionation or whatever it was that was being done at the time. We've heard a lot about the posterior wall, and we've heard a lot about the left atrial appendage. We've had the ability, in the context of treating our patients and a lot of our unknowns, to implement things almost seamlessly. Then, we have a community that responds very nicely to the social science of socializing these features and incorporating them. So, I think our evolution in EP has been remarkably easy, but there are some elements now with AI where we're going to have to face the problems. There's a comment.
Audience question: Yes, thank you very much. This has been great so far. Dr Ghanbari, thank you for raising the issue of value and innovation. We get all this big hype: the ECG can tell you what the ejection fraction is, the ECG can tell you the gender of the patient, but in the end, what is the value? Am I still going to have to do an echo? Yes, maybe. The patient is there telling us their symptoms and we're there to listen to them. So, my question is, it tells you the likelihood of AF and then you still have to do all the other work, and this again goes back to efficiency increasing demand, which is true. We see it in our own lab: we're improving efficiency, but we're getting drowned with patients even more. So, I think this issue of “value” has to be stressed to everybody. What is the value in this? What is it helping us solve? If there isn't any, then it's just hype. What are your thoughts on that?
Hamid Ghanbari, MD, MPH: I'm so glad that you brought that up, because I feel like I have this conversation every day with our technology partners. When you're thinking about AI, everyone's talking about it and it seems like a cool thing to say all the time, but like you said, it oftentimes doesn't solve a real problem. So, you have to go back to the drawing board when you're thinking about this. The way I always think about it is that you have to start with what your organizational mission is. For us, it's to improve health in Michigan and the world. Then, we have a set of strategies around achieving that mission. Now, AI can enable those strategies to become better. So, if you have an AI strategy that helps research, then you can formulate a set of initiatives that create value around those strategies. I think you really have to be careful and thoughtful in thinking that through. The way we see it, the real value is not being created with all these fancy algorithms. I think the real value right now in our institution is being created in the back office, in that low-hanging fruit where pre-authorization and pre-approval can be done quicker and scheduling for outpatient labs can get done faster, and that translates into value that people can understand and capture. I think for us, that's where we start. Over time, as you develop organizational literacy, then you can move into more complicated AI projects that are patient facing and that require a lot of algorithmic expertise and implementation. So, that has been our approach. I want to hear some of you on the panel talk through that.
Chan Ho Lim: Yes, maybe I'm just echoing what Dr Wazni and Dr Ghanbari said already, but I really believe in the value of things growing organically out of demand. Think of the cities that grew organically out of demand from the people, versus a planned city developed by one rich guy with a vision that sounded pretty smart. AI is the same, and it's not just a problem in medicine but all over the world: there are too many AI models that exist because they're cool and we can build them, like ECG age or ECG gender, and so many other things. But which physician wanted to use these in their clinic? These are the things that I think we as a community can think about when we're developing AI models: the demand that can grow organically out of our community, not just what is cool.
Kevin Makati, MD: I think I'm a bit of a pessimist. We've had a lot of technical wins with AI and a lot of practical failures. There's a review that was just published looking at AI implementation in AF management over the last 10 years, and the number of publications simply putting AI in the title has skyrocketed, but has it really meaningfully changed our clinical practice? That's the problem—translating these features into actual clinical management. I think there's still a wide gap. The other thing that's about to happen is that the FDA just released a draft proposal on implementing AI. So now, companies and vendors will have to go through a process, whereas before there was really no guidance. Now there's an actual process, an actual 510(k) pathway, so you can't just add AI to whatever you're doing, such as mapping software—there's now an actual validation process. So, the time it takes for a new AI feature to actually make its way into clinical practice, where it's actually making us more efficient and reducing the number of lesions we do for ablations, has now just been extended. So, I think it will take some time, in my opinion, before we see meaningful clinical changes in our practice.
Tina Baykaner, MD: May I be the optimist for just a second? Some of these models are new, but others are established, like the Mayo ECG algorithm, and we know the performance of those models. From the last presentations from the Tulane group, and we've alluded to that too, imagine you've done an ablation, you've had an ECG afterward in the PACU, the ECG reads as likely to have an early recurrence, and the accuracy is reliably high. They're going to have recurrent AF maybe in the next 2 to 3 weeks. Maybe you would be more likely to monitor, give them a call sooner, or keep them on an antiarrhythmic. I think if the models are reaching performance levels that can affect our day-to-day practice, they are actionable. I think they have value, to Dr Wazni's point. If we knew how patients would fare, that would give us value. Dr Marrouche brings up at every presentation that we have a ceiling of 50% in persistent AF in any randomized clinical trial done. If we had an AI mapping mechanism that takes us beyond what we can signal process with the existing tools and that can take that ceiling up to 70%, I think that is value, because we know patients who do well have fewer adverse outcomes. So, I think there's a lot of hope for the use of it.
Nicholas Peters, MD, FRCP, MBBS, FHRS: Undoubtedly, there are vast grounds for hope.
Audience member: I just wanted to address the point that was made—I fully agree there are so many AI applications that have failed, but the only way to really make it work is to have the clinician be really involved in its implementation. For example, in the Netherlands, we've seen mass adoption of AI, but the only reason it's working is because the clinicians are not letting the technologists play around by themselves—they're playing a very proactive role in the design and development of the software and in how to do the alerting. So, don't let the AI people do their thing alone; by working together and holding their hands, you know exactly what's happening in the clinic. We've been able to get mass adoption, but only by working closely together. So, that's what I wanted to share.
Nicholas Peters, MD, FRCP, MBBS, FHRS: That's a great comment. Of course, that's the strength of this meeting right here, right now—we now have industry, physicians, and data scientists talking the same language, and that's part of the social science. It's just getting the language and the culture right.
Thomas Deering, MD: That's a great point, Nick. Having people with diverse backgrounds and stakeholders really allows us to get to a better place. I'd like to question all of you and strike a balance between pessimism and optimism, and put it in the middle: realism. I know we use the term AI for artificial intelligence, but I take 2 other approaches to it. One is augmented intelligence; in other words, just like the industrial revolution you spoke about, Nick, it should make us better. The other is ancillary intelligence; it should be a partner. You and Kevin talked about some of the ways it is failing in terms of allowing us to develop processes to do things, but I do think there's great hope there. We've got to figure out how to do it. But if you're a primary care doc and you have a hypertrophic cardiomyopathy patient who meets criteria for medication interventions or procedural interventions like a defibrillator, a prompt would be helpful. I think it can remind us. The other thing, and I'd love all your thoughts on this, is when you finish your day at work and you are assigned on that particular day to read all the device tracings that come through. I love sitting down at 10 o'clock at night with a 200-page report that I have to go through. Hamid, you absolutely do love it, but AI has done a better job for us. When can we start using AI as a real partner, where all we need to see are the things that are truly problematic and need intervention? So, I think there are a lot of opportunities here, but could you weigh in on the process going forward and what we need to do, as well as on AI as a partner we can rely on to help with interpretation? As you know, ECG reading, monitor reading, and echo reading are sometimes done better by those tools than by us poor clinicians.
Nicholas Peters, MD, FRCP, MBBS, FHRS: I'm delighted with how this session is shaping up. We could almost write a white paper between us in this room, because we've distilled some really important points. Tom, I think you gave more than you asked with your comment, because you made a lot of very insightful points. But there were questions there. Are there any answers from the panel?
Tina Baykaner, MD: I think we've come a long way. We talk on the same panels about remote monitoring and how AI has helped reduce the false-positive burden of pauses, of AF detection, and so on. So, I am pretty sure you are spending an hour less every week on reading those false-positive reports today. I know AI is in every patch monitor right now; iRhythm uses AI as part of their rhythm allocations. At Stanford, large language models are in the background listening to your clinic visit and then transcribing a note for that visit. So, that's today's implementation of AI to save you time. I know it hasn't quite shaped our AF ablation strategies or clinical follow-up strategies, but I think it's already there and saving us a good amount of time.
Hamid Ghanbari, MD, MPH: I want to share an anecdotal experiment with you, Tom. We had this idea to build a really nice AI algorithm that could read all the easy ECGs and give the difficult ones to the clinicians. So, we did a little experiment, and it very quickly drove everybody crazy, because you don't want to sit there all day long reading only very difficult ECGs. Our clinicians were actually happy to see sinus rhythm, so we had to abort it.
Nicholas Peters, MD, FRCP, MBBS, FHRS: Therein lies a really important social science element. So, here's the thing. The amount of AI that has been incorporated into clinical practice by our profession as an entire ecosystem, globally, across specialties, is woefully little. It has not thus far lived up to its promise: the promise in the first line of every grant application written in the last 10 years, that this best thing since sliced bread is going to change the world of clinical practice. We know that's how these things are written, and it's in the last line of the conclusion of every paper from the last 10 years, and it just hasn't happened. I'm going to put down a challenge, and actually, I'm going to say that the responsibility lies with us.
I do think this room is a hot spot. If you were to plot the level of vision across the face of the planet at the moment, there is a hot spot here. On the FDA pathway, I always question who the people are who have input into these pathways, and which side of the valley of death it sits on. I would argue that the FDA pathway is still on the wrong side of the valley of death, because it doesn't really address the social science of getting things into clinical practice, getting our colleagues to use them, and ultimately getting into the guidelines. So, I think that's not the answer. That's still the wrong side of the valley of death. I'll come to my challenge a bit later, but we have a comment.
Kamala Tamirisa, MD: Thank you for this incredible session—there has been a lot of open dialogue. As an optimist, I read cardiac MRIs, and I can tell you that with the AI algorithm, the time has really been cut down. Before that, we were drawing contours that would take forever. I have 2 questions for the panelists and anyone else. First, are you aware of any AI content as part of curriculum development for fellows in cardiology or EP, and do you think it's time to think about that, because we need to train the future generation? Second, burnout and workload are significant; we have AI scribes, but is there anything that we as physicians can do to make sure that the AI scribe is integrated with the EMR systems so it can be used effectively? Thank you.
Nishaki Mehta, MD: I'd love to take your second question. We recently got an AI scribe in our practice, and I was probably one of the first adopters, and I found it was very inefficient, which was not what they wanted to hear. With every cardiology note, I was getting a general health maintenance and primary care structure, but what it brought to mind is that I have to keep working with it so it will iterate to the point of efficiency, and that will take 1 month or 2 months or however long it takes for it to learn what I need to go into the note. The second phase of that is that it introduced coding. It looked at my note and started telling me what level to code. To me, that is huge, because our system requires us to respond to coders within 72 hours, which is dreadful. But if they use the AI scribe, they take that burden away from me. To do that, though, I have to use the system, and it's a pain. A lot of my older colleagues prefer just using Fluency Direct, which is a headset, rather than the AI scribe. Again, I think that brings up education and patience. It's not going to be a panacea, but it will be an augmenting or facilitating partner.
Hamid Ghanbari, MD, MPH: Maybe I can add a couple of extra points on that. For us, with the EMR scribe, I do not know how much you pay, but we pay $500 per license per month, so it's really not something you can use widely because of the expense. As the costs come down, it may become more useful, but right now, it's hard to deploy at scale. For your question about it making the false-positives easier, the question becomes what you do with that extra time. In our institution, you get paid by the number of things you do, so you end up adding more things that people have to do. From a quality-of-life perspective for a physician, they end up doing maybe more than what they were doing before. So, it doesn't necessarily improve their life that much.
Kevin Makati, MD: I just wanted to add very quickly to the first part of your question, on education. I'm not aware of any formal curricula in place for cardiology fellowships, and certainly not for EP fellowships. To expand on that, AI should be something that everybody knows how to leverage, because it is going to be part of our lives at some point. I told my kids I will not answer any questions anymore. If you have a question, you should download Grok, Copilot, Gemini, or any of these tools that are available in the public space, and it will provide you a better answer than I ever could.
Nicholas Peters, MD, FRCP, MBBS, FHRS: Which raises an important question: why do we learn anything anymore, when we have more information in our pockets than we can possibly carry in our heads? But you're right, computer science taught in educational establishments has gone from working with Microsoft programs to coding, and coding is now a thing of the past. AI has to be fundamental, and we surely do need to introduce courses.
Audience question: I want to speak a little bit about value. The value of predicting who's going to have AF from an ECG is really not that great, except if you want to motivate them to lose weight or things like that. The real value would be predicting post ablation who's going to have AF, because that would allow you to change treatment. But can we use the pre-ablation AI ECG to predict AF post ablation? Because I think there would be a lot of changes. Would you believe it post ablation?
Tina Baykaner, MD: We have looked into that. Our paper was published in 2023 in Circulation: Arrhythmia and Electrophysiology. Exactly to your point, we incorporated preprocedural 12-lead ECGs into a predictive model. Around 150 patients underwent ablation of any methodology, had follow-up, and either had recurrent AF or not. The preprocedural ECG was quite predictive of who did well—maybe it's reflective of the underlying substrate. You can see how diseased the LA is from signatures in the preprocedural ECG alone. So, that was quite impactful. The ECG alone was more impactful than all the clinical factors you can incorporate into that score, and they were additive. So, if you combine the ECG with age, gender, and other things that matter for AF, that model performed quite well.
Nicholas Peters, MD, FRCP, MBBS, FHRS: We've got very little time left, but what would you like to say?
Audience question: Well, 2 points. One is that we have data at this symposium on the previous question that was just asked. We have pre-ablation P-wave duration data that predict 9- to 12-month outcomes in a USC cohort of about 300 patients. That's just P-wave duration by area under the curve. As Tina said, that's been validated in other studies, and the AI algorithms are good at predicting post ablation outcomes. The second point is to you, Nick: do you think the social sciences are easier to implement to get across this valley of death in a system like the NHS, where there is less competitive influence? Or is it actually more difficult when you have more inertia?
Nicholas Peters, MD, FRCP, MBBS, FHRS: Yes, every health care system on the face of the planet has big problems, and the NHS certainly has enormous problems. I'm going to just use the last few seconds for my anecdote and challenge. Back in the days before COVID, when you had coins in your pocket, I used to put my pound coins into a jar. Recently, I got fed up moving this jar around and took it over to the bank. I put it on the desk, and the guy said, “Sorry, mate.” I said, “What do you mean?” He said, “We don't have money in here.” I was on my way to a clinic, “suited and booted” as I am now, and I thought, I do not know when I last went into a bank, I do not know when I last spoke to anyone in a bank, but I can look at my phone and transfer half a million dollars just like that. Yet here I was, going to a clinic to see people who continue to come in person, the way you had to go see the bank manager 40 years ago to ask for an overdraft, and I was about to see 15 of those patients in a clinic. We must do things differently in health care. Changing our own behavior is what is fundamentally important. We've got to change. The population will come with us; we've got to change our own expectations.
So, look, this has been the most fantastic session. Really awesome. I'd like to thank the panel and Hamid, my cochair, but particularly the floor; I think it has been electric. Thank you.
The transcripts were edited for clarity and length.