In this episode of the HIPAA Insider Show, hosts Adam and Gil tackle the myths and realities of AI in healthcare. They discuss how artificial intelligence is truly being implemented, what’s working, and the crucial compliance considerations organizations need to keep in mind. Forget the hype—this episode breaks down what AI can actually do for healthcare today.

Transcript

Adam Zeineddine
Hello and welcome back to the HIPAA Insider Show. I’m Adam, and with me, as always, is our resident tech guru, the CTO and founder of HIPAA Vault, Gil Vidals, who’s probably got a couple of AI bots running in the background right now.


Gil Vidals
Trust me, Adam, if I did, my coffee would be a lot hotter right now. But AI in healthcare is exactly what we should be talking about today. There’s this massive gap I see between what AI is actually doing at hospitals and what it’s purported to do, you know, through the media.


Adam Zeineddine
Yeah, I mean, I’ve got an inbox full of vendors promising AI systems that can basically do everything, replace entire medical departments, if I just subscribe.


Gil Vidals
Yeah, well, it’s kind of like selling oceanfront property in Arizona. There’s what’s really happening on the ground versus what the media shows. I’ve been tracking some of the implementers, the ones actually taking AI and deploying it across the country, and I’d say the reality is a lot less flashy. But it’s more important to understand how AI is actually being used, how it’s being deployed, and what benefits it’s giving hospitals right now.


Adam Zeineddine
Yeah, really excited to dive into that today. The topic, if you didn’t read the title on the video, is the real deal with AI in healthcare. Before we get started, subscribe, like, comment, and let us know your thoughts on the content. And you can also reach out to us on the podcast if you have any questions. All right, so let’s break it down for our compliance- and tech-focused folks. What’s really working right now?


Gil Vidals
Okay, let’s break it down this way. I’d say there are three categories, three areas where AI is helping in the healthcare field. The first one is clinical decision support, which is essentially helping doctors generate diagnoses. There was an interesting study, for example, that came out last week. They tested Claude, which is a competitor to ChatGPT, and compared what Claude could do against a surgical resident. And the resident actually acknowledged that the AI was better at analyzing written case descriptions and suggesting possible diagnoses.


Adam Zeineddine
Wait, so you’re saying an AI is better than doctors?


Gil Vidals
I’m not saying that. I’m just saying there’s a crucial distinction we need to understand: the AI was only better at a very specific task, analyzing written descriptions of symptoms and matching them to potential diagnoses. What that means is it’s very good at pattern matching in text. And that’s very different from having an actual patient in a room where the doctor is diagnosing them in person, because they’re reading body language or making a complex medical decision. So I would think of this more like having a really smart medical reference book that can actually analyze patterns.


Adam Zeineddine
Okay, so that’s one of the three main categories. What are the other two that you’re seeing?


Gil Vidals
Okay, the other one is medical education. There’s a medical educator who’s been using the Claude API to help train doctors, and they’re seeing pretty good results with things like providing consistent feedback on diagnostic reasoning. And then the third area is administrative. That’s the boring area, but it can actually save on budget. They’re using AI for documentation, transcription, coding, scheduling, things of that nature. And this is actually where we’re seeing some of the more practical benefits, because it has fewer compliance hurdles.
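
For readers who want to picture what that kind of setup could look like, here is a minimal sketch of asking Claude for feedback on a trainee’s diagnostic reasoning using the Anthropic Python SDK. It is not the educator’s actual implementation; the model name, prompt wording, and the assumption that only de-identified case text is sent are illustrative.

```python
# Minimal sketch (not the educator's actual setup): asking Claude to critique a
# trainee's diagnostic reasoning via the Anthropic Python SDK. Model name,
# prompts, and the use of de-identified text are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

case_summary = "De-identified written case description goes here."
trainee_reasoning = "Trainee's differential diagnosis and reasoning goes here."

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model alias; use whatever your agreement covers
    max_tokens=800,
    system=(
        "You are assisting a medical educator. Give structured, consistent feedback "
        "on the trainee's diagnostic reasoning: strengths, gaps, and missed differentials. "
        "Educational use only; do not provide patient-specific medical advice."
    ),
    messages=[
        {
            "role": "user",
            "content": f"Case (de-identified):\n{case_summary}\n\nTrainee reasoning:\n{trainee_reasoning}",
        }
    ],
)

print(response.content[0].text)  # the feedback text shown to the trainee
```

Note that sending any case text to a third-party API raises exactly the compliance questions discussed in this episode, so a BAA and a de-identification review would come before anything like this touches real data.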


Adam Zeineddine
Yeah. And what should our listeners be looking out for with these types of implementations from a compliance standpoint?


Gil Vidals
I think the key point here is understanding what these systems can and cannot do. When I review the implementations, I’m seeing organizations succeed when they treat AI as just a support tool rather than a replacement for clinical judgment. The successful implementations have clear protocols for human oversight. And most importantly, you have to have clear boundaries on what the AI system can do and what it’s authorized to do.
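
One way to make those boundaries concrete, purely as an illustration and not anything described in the episode, is an explicit allow-list of tasks the AI is authorized to handle, with everything else routed to a human:

```python
# Purely illustrative way to encode "clear boundaries": an explicit allow-list
# of AI-authorized tasks; everything outside it goes to a human.
AI_AUTHORIZED_TASKS = {
    "appointment_scheduling",
    "visit_documentation_draft",
    "transcription",
    "billing_code_suggestion",   # suggestion only; a human coder signs off
}

def route_task(task_type: str) -> str:
    """Return 'ai' or 'human' for a task type; hypothetical helper, not a real API."""
    if task_type not in AI_AUTHORIZED_TASKS:
        return "human"           # outside the AI's authorized scope
    return "ai"                  # allowed, but still subject to human review downstream
```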


Adam Zeineddine
Yeah, I’ve been seeing a lot of buzz about something called AlphaFold. What’s the deal there?


Gil Vidals
AlphaFold does have some excitement around it that I think is justified. The recent announcement was from Google DeepMind and Isomorphic Labs, and they’ve said their latest version is a very significant development. What they’ve done is create a system that can predict how proteins fold and interact with incredible accuracy. Protein folding is a big deal because proteins are the engines of the body; they’re like the factories that help determine whether a cell becomes a lung cell, a hair cell, or an eye cell. So they’re understanding that with the AI’s assistance.


Adam Zeineddine
Adam Zeineddine
And why should our listeners be interested in protein folding? Why is it important?


Gil Vidals
Okay, so what’s important to understand is how things currently work when it comes to drug discovery and research and development. The way it works right now is you have these labs full of researchers, these PhDs, and they’re basically doing trial and error. They call it research, but think of it more like trial and error. They try something, hit a dead end, back up, try something else that doesn’t work, try something else. They keep iterating until they finally have some success. Now think about AI with AlphaFold: it can exponentially speed up that trial-and-error cycle, getting through the attempts that don’t work much faster. The system is already being used by research institutions through the AlphaFold Server.


Gil Vidals
And this is crucial for the audience to understand, because right now it’s primarily a research tool. The compliance considerations are more about data security in research settings, and there’s really no clinical implementation right now. So for now it’s mostly research. But again, speeding up that trial-and-error cycle is really what AI is going to do. Instead of taking years and years to come up with a breakthrough, that might be compressed down to months or even weeks. So it’s very exciting.


Adam Zeineddine
Let’s talk about something that is potentially causing a lot of compliance headaches, and that is AI in mental health support. What are you seeing there?


Gil Vidals
Yeah, for mental health support, Adam, I think we’re seeing a lot of potential, but we’re also seeing significant risk. There’s been recent coverage of people using AI chatbots to manage their anxiety, and there’s one widely discussed case involving a user, let’s just call him Joe, who reported significant benefits from using AI as a supplementary support tool to manage his anxiety. But healthcare organizations have to be cautious, because the compliance implications are more complex than with physical health AI applications. Smart organizations are implementing multiple layers of protection: clear AI disclosures, for example, robust privacy safeguards, and strict escalation protocols for crisis situations. Because if the AI is listening and contributing and saying positive things to the patient, and the patient says, hey, I don’t feel good, I feel like I’m going to jump out the window, commit suicide, how is the AI going to handle that? You have to have protocols that take all of that into consideration.
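
The episode names the requirement, strict escalation protocols, without prescribing an implementation. As a rough illustration only, one common pattern is to screen every incoming message for crisis indicators before the chatbot is allowed to respond, and hand off to a human the moment one is found. The keyword list and helper functions below are placeholders, not a clinical screening tool.

```python
# Minimal sketch of a crisis-escalation gate, not a clinical tool. The keyword
# list, messages, and handoff helpers are placeholders; a real deployment would
# pair validated screening with trained humans and documented protocols.
CRISIS_INDICATORS = ("suicide", "kill myself", "end my life", "jump out the window", "hurt myself")

def needs_escalation(user_message: str) -> bool:
    """Very rough screen: does the message contain an obvious crisis phrase?"""
    text = user_message.lower()
    return any(phrase in text for phrase in CRISIS_INDICATORS)

def notify_on_call_clinician(message: str) -> None:
    """Placeholder: page a human responder (on-call clinician or crisis line staff)."""
    print("ESCALATION: human responder notified.")

def generate_supportive_reply(message: str) -> str:
    """Placeholder for the normal chatbot reply path."""
    return "Thanks for sharing that. Let's work through it together."

def handle_message(user_message: str) -> str:
    if needs_escalation(user_message):
        notify_on_call_clinician(user_message)   # hand off to a human before any AI reply
        return (
            "I'm concerned about what you just shared. I'm connecting you with a person "
            "right now. If you are in immediate danger, call or text 988 (in the US) or 911."
        )
    return generate_supportive_reply(user_message)
```

In practice the screening would be far more sophisticated and validated, but the structural point matches Gil’s: the handoff to a human has to be designed in before deployment, not bolted on.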


Adam Zeineddine
Sounds like quite the tightrope. But what’s actually working, then, in this space?


Gil Vidals
What’s really working in this space right now is something we call low-threshold support. These are things like handling just the initial screening, maybe guided journaling, basic coping strategies. Organizations are making sure all of this is integrated into their existing mental health support system with proper human oversight. And no one’s letting AI be the only line of defense in mental health care, not at this point.


Adam Zeineddine
Okay, crystal ball time. What should our listeners be preparing for in the space?


Gil Vidals
Okay, well, instead of trying to get out my crystal ball, what I can tell you is that successful organizations are preparing for the next wave of AI. They’re really focusing on three areas. The first one is getting their data house in order. By that I mean organizations will succeed with AI if, internally, their data is ready for it and well organized. The second one is starting small and scaling smart. The best implementations I’m seeing begin with narrow, well-defined use cases, maybe just using AI for appointment scheduling or initial patient triage, and then they grow from there. And the third one is investing in training. I don’t mean just technical training, but ensuring staff understand both the capabilities and the limitations of the AI tools. That’s crucial for maintaining compliance and ensuring appropriate use.


Adam Zeineddine
And any final advice for our listeners out there?


Gil Vidals
Yeah, I would say where we are right now in history is: don’t get caught up in all the hype about AI. Focus instead on practical implementations that solve real problems in your organization. Take baby steps, implement things that save on your budget and save time for your staff. And just because a vendor says their AI system is HIPAA compliant doesn’t mean it’s ready for prime time. So you have to do your due diligence, start small, and always have clear protocols for human oversight.


Adam Zeineddine
And there you have it, folks, the real deal with AI in healthcare. Remember to hit us up at hipbot.com, where you can check out all the show notes. And that’s it for this week’s episode of the HIPAA Insider Show. Stay safe and stay secure.