
An Introduction to Artificial Intelligence for Telepsychiatry

Featuring Steven R. Chan, MD, MBA


Artificial intelligence is the definitive hot topic across numerous industries, including mental health care.

In this video from the 2023 Psych Congress Elevate, Steven Chan, MD, MBA, clinical assistant professor, Stanford University School of Medicine, provides an overview of artificial intelligence (AI) for psychiatrists, whether they practice via telehealth or in person. Dr Chan emphasizes the importance of understanding how different AI systems are constructed and previews some exciting frontiers for AI applications in psychiatry.

Find more insights for your virtual practice on our Telehealth Excellence Forum.

For more information and to register for the 2024 Psych Congress Elevate, held May 30 to June 2 in Las Vegas, visit the meeting website.


Read the Transcript:

Steven Chan, MD, MBA: I'm Steven Chan. I'm a member of the Steering Committee for Psych Congress, and I am also a clinical assistant professor affiliated with Stanford University School of Medicine.

Psych Congress Network: What do psychiatric clinicians need to know about artificial intelligence as it pertains to their practices?

Dr Chan: To understand artificial intelligence, I think it's best to boil it down to a few key principles.

For AI, it's a matter of inputs, what kind of things you're feeding into the AI itself, and then what kind of things the AI is outputting. So, understanding how the AI was created, what kind of rules, data, and patterns it uses, is important, as well as knowing what to expect when you give the AI specific inputs and look at what it outputs. We've seen in the consumer space a lot of excitement over ChatGPT, Microsoft Bing, and Google's Bard, because these AIs use a specific type of machine learning—large language models—that takes the input a user types in and returns a very, very human-like, in many cases empathic, supportive response; but in some cases, it may not be so empathic, and may also include some inaccurate information.

It's important to know as well how the AI was constructed. Some AIs are not as sophisticated, and you have to input specific words and make specific choices. Just think of automated telephone voice menus, where you have to call and say specific things, like "Pizza" for pizza or "Appointments" for appointments, and then you get the specific output that you want.
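As an illustrative aside (not part of Dr Chan's remarks), here is a minimal sketch of the kind of rule-based system he describes, where only exact, pre-defined inputs are recognized; the MENU table and respond function are hypothetical names chosen for this example:

```python
# A minimal sketch of a rule-based "AI": like a phone menu, it only
# recognizes a fixed vocabulary and has no learned language model behind it.
MENU = {
    "pizza": "Connecting you to the pizza line.",
    "appointments": "Connecting you to scheduling.",
}

def respond(utterance: str) -> str:
    # Anything outside the fixed vocabulary is simply not understood.
    return MENU.get(
        utterance.strip().lower(),
        "Sorry, I didn't catch that. Say 'pizza' or 'appointments'.",
    )

print(respond("Appointments"))            # Connecting you to scheduling.
print(respond("I'd like to book a visit"))  # Falls through to the fallback message.
```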

Within the realm of psychiatry, we're seeing this applied to things like passive sensing and passive data, where we are inputting, for example, someone's location from GPS coordinates to infer wandering behaviors in dementia, or to flag when someone with alcohol use disorder is getting too close to a place that could trigger an undesirable behavior, such as a bar or a store that sells a lot of liquor. We are also seeing things like active data, which includes efforts to infer mood states from someone's voice, or even from their face, using facial recognition to read emotional affect. None of this is in use as a standard in psychiatry yet, but we can see potential use cases for augmenting a clinician's mental status exam or helping someone understand their moods better. I've also seen this in autism, for things like eye movement and eye tracking. So, a lot of exciting things.
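As another illustrative aside (again, not part of Dr Chan's remarks), a minimal sketch of the kind of check a passive-sensing app might run on incoming GPS readings; the flagged coordinates, the 150 m alert radius, and the function names are hypothetical, chosen only to show the idea of comparing a location stream against a place of concern:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in meters."""
    r = 6_371_000  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical flagged location (e.g., a liquor store) and alert radius.
FLAGGED_LOCATION = (37.7793, -122.4193)
ALERT_RADIUS_M = 150

def proximity_alert(current_lat, current_lon):
    """Return True if the current GPS reading falls inside the alert radius."""
    return haversine_m(current_lat, current_lon, *FLAGGED_LOCATION) <= ALERT_RADIUS_M

# Example reading from a phone's location sensor (about 50 m away).
print(proximity_alert(37.7790, -122.4189))  # True: inside the 150 m alert radius
```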

I think there are some other things that we need to understand. Who is regulating what is being input and what is being output? We want to make sure that these chatbots, artificial intelligence agents, and algorithms are providing output that is not biased toward any one culture or any one language, or at least that we're aware of what the limitations are, because we don't want to perpetuate any sort of stigma or prejudice that already appears in a lot of our research data and research studies. So, there are a lot of challenges in this space. I think it's an exciting time for us. We just need to be very aware of all the different things that go into a specific AI model before we use it or rely on it to make decisions.

One final thing, too: clinician burnout is such a huge deal. Clinicians burn out because there is so much paperwork and there are so many notes to write. So I'm very excited to see a lot of electronic medical record (EMR) companies and other clinician-facing tools that will automate a lot of the drudgery we spend so much time on after hours. We just need to make sure they provide the right answers. If anyone has seen the Simpsons episode where, I think, one of the characters said one thing and the device output something else entirely, we just want to make sure that, as in all these films, popular culture, and our everyday use, the AI is as accurate, reliable, and error-free as we can make it. That's AI for psychiatry.


Steven Chan (@StevenChanMD, www.stevenchanMD.com) is an actively practicing physician at Palo Alto VA Health, specializing in psychiatry, clinical informatics, and healthcare technology. Dr Chan performs clinical research in telehealth and digital mental health, with applications in underserved and minority health. Dr Chan is a sought-after national speaker whose ideas, thoughts, and research have been featured by Talks at Google, JAMA, Telemedicine and e-Health, the Journal of Medical Internet Research (JMIR), Wired, PBS, and NPR Ideastream.


 

© 2023 HMP Global. All Rights Reserved.
 
Any views and opinions expressed above are those of the author(s) and do not necessarily reflect the views, policy, or position of the Psych Congress Network or HMP Global, their employees, and affiliates.
