Frontline Mobility Edge

AI in Healthcare: What's Ready, What's Next, & What's Not

BlueFletch


Every healthcare conference in 2026 has the same message: AI is the answer. But is it? 

Brett Cooper and Lee DeHihns break down what's actually working, what's still hype, and who's looking out for patients when the technology moves faster than the guardrails.

In this episode:
→ Augmenting vs. replacing: Where AI fits in a doctor's toolkit today (and where it doesn't)
→ Shadow AI: Clinicians are already using unapproved tools, and the HIPAA risks are real
→ Patient advocacy: How patients are using AI to prep for appointments, decode insurance forms, and get second opinions
→ The healthcare CIO's dilemma: Where to invest limited budgets when 60% of hospitals run at a loss
→ Why "buy a screwdriver, look for screws" is the wrong approach to AI adoption

Brett and Lee draw from months of healthcare conferences, real doctor visits, and frontline conversations to separate signal from noise.

Why AI Dominates Healthcare Talk

Brett Cooper

I'm Brett Cooper, and this is the Frontline Mobility Edge, where we discuss the latest in mobile device technologies and how they're shaping the frontline business landscape. Thank you for joining us. Let's get started. Welcome to this episode of the BlueFletch Frontline Mobility Edge. I'm Brett Cooper, joined by Lee DeHihns, and we're talking about AI in healthcare today. The title we have for this episode is AI in Healthcare. We originally had it as "AI Is Coming for Doctors' Jobs," which seemed a little scary, and my doctor friends would be angry at me. What we settled on is: what's ready, what's next, what's not, and who's watching out for the patient. The origin of this discussion is that Lee and I have been to a number of healthcare conferences over the last four months or so, and in probably 95% of the presentations we attended, some segment was about AI, or maybe even the whole thing. We took a lot of notes and came back with a lot of interesting items. Some were thought-provoking, and on some we didn't agree with the panelists, but it's always good to hear contrarian opinions. So we want to talk through this at a high level. Lee, from a framing perspective, when you think about this topic, what do we want to cover today? If you had to sum up this whole thing, what would you say the description is?

Lee DeHihns

Yeah, I think there are a couple of things. When you look at the current state of healthcare outside of AI, demand for patient care is growing. Doctors and nurses are getting burned out and being asked to do more with less. Patients are waiting longer and may be seeing doctors less often than they normally would. It's resulting in burnout on every side. And at all the conferences we've been to, every booth except maybe ours said AI is the answer. But you can't just look at healthcare writ large and say AI will fix it. You have to frame it around what is actually solvable, similar to how we approach other problems we solve: what are healthcare workers facing today, what are patients facing today, which existing workflows might be made better with the introduction of AI, which ones aren't ready, and how do you figure out when and where to apply those things?

Augmenting Clinicians Versus Replacing Them

Brett Cooper

Got it. And just for context for everyone, this is being recorded at the beginning of Q2 2026. So if you're listening to this in 2028 and AI has already taken over the world, we're sorry. The first area we want to talk about is augmenting versus replacing. A lot of early discussions were framed as "AI is going to replace doctors, AI is going to replace nurses." Where most people have landed is that AI is a tool, like every other tool in the doctor's bag or the nurse's toolkit, and it's something that should augment and improve their job. The "should" is the key word. When you think about that argument, are there things that are clear winners for augmentation, a list of five things, say, and then maybe one or two things that might actually be replaced by AI because they're just not things doctors and nurses want to be doing?

Lee DeHihns

I think that makes sense. Personally, having been to the doctor a few times over the last six months or so, I've had two different physicians come in, pull their personal device out of their pocket, put it down on the table between us, and tell me they're using it for dictation, background listening, charting, and recording records for the EHR. So that is definitely something where we're seeing a consensus build: it augments what already has to be done. It's a lot easier than taking notes while you're engaging with the patient, or having your back to the patient while typing. We've all been in that room. It creates an opportunity for more one-on-one time with the patient, so that when a physician is with a patient, it's actually one-on-one time and not watching somebody chart. That's one of the tasks that is probably going in that direction. And following from the background listening: if you think about charting for medications dispensed or procedures done in a room, there are codes that need to be entered for insurance reimbursement. I think billing segmentation and billing automation are also ready for it. Then there are other things, like clinical judgment, diagnosis, and human empathy, where I don't know that AI is ready. So that's where my mind goes, at least initially, when thinking about what can be augmented versus what can be replaced. I don't see it as a replacement just yet.

Brett Cooper

Yeah, that's a good call-out. I don't expect my AI to be giving me a hug at the doctor's office anytime soon. Robot hugs are creepy. Robot hugs! You don't know. Have you gotten one recently?

Lee DeHihns

I haven't. It's been a minute. Yeah.

Documentation Wins And Limits Of AI

Brett Cooper

They're getting better every six months. One of the other things that came up is probably brought about by the history of EHRs: we went from paper charting and paper note-taking to these electronic systems, and a lot of clinicians and healthcare workers I've talked to complain about how cumbersome certain EHRs are to actually enter data into. It creates more work for them rather than helping. There were stats around the amount of time a doctor spends charting: during a 10-hour shift, three to four hours might be spent on the EHR, as opposed to seeing patients and engaging with people in the hospital. Do you see, or have concerns about, AI actually creating a burden for clinicians and healthcare workers, rather than taking work away?

Lee DeHihns

That's definitely something you need to watch out for, and we'll talk about it a little later in this podcast in terms of what technology leadership in a hospital system needs to consider when bringing on AI. Adding a system on top of an existing system, if it's done incorrectly, always risks overburdening the end user. You still see doctors and nurses taking paper notes on top of what they're doing in charting. And having talked to people in healthcare, EHR rollouts aren't always met with a lot of training support. We saw that in our nursing survey, where better training on the systems would certainly ease the burden. So if you're adding another layer but not doing everything you should to make sure you're not adding complexity, you could end up in the same spot. I don't think you have to, but it's something you have to be cognizant of.

Brett Cooper

Yeah, I thought you were going to say doctors are going to complain about having to remove all the em-dashes from the notes.

Lee DeHihns

That's also possible, yeah.

Brett Cooper

Yeah, it just creates more work. I've got to get rid of 30 or 40 em-dashes every time AI creates something for me. I think there's a good segue from your previous comment into what's been termed shadow IT, or shadow AI. Shadow IT is when somebody in an organization brings their own tools to their job to improve things; shadow AI is people bringing their own AI tools. I'm not sure if your doctor had an approved transcription app or if it was something he paid for himself, but there's always this fear factor around shadow AI: if somebody's using something that hasn't been approved by the hospital, is it HIPAA compliant? Is my data being leaked somewhere? For you, what is the scariest thing about shadow AI?

Lee DeHihns

It's funny you mention whether my doctor paid for that dictation tool or whether it's an extension of their existing EHR. I didn't ask that question, but it's the first thing that popped into my head when his personal phone was put between the two of us. When you think about what's scary about AI tools inside healthcare, underpinning all of it is security, patient privacy, and HIPAA compliance. If you're not using an approved tool, where is that data being stored? And if you're using an external service that isn't controlled by the hospital or doesn't have enterprise settings, what source data is a clinician or caregiver reaching out to when they ask it questions? For example, in my own AI instances, I'll put in my information. But what's to stop a nurse or clinician from copy-pasting my information into their personal instance of whatever AI engine they prefer? You have to have guardrails around that. With any tool, and particularly one this data-heavy and information-rich, you need guardrails around exactly how that data is being used, where it's being stored, and where you're sourcing it. To me, that's the scariest thing, at least initially.

Brett Cooper

Yeah, it is a little terrifying now that you say it out loud. Shadow IT, or shadow AI, is happening in a lot of industries. I've seen software developers who will only use approved tools; our company has a specific policy that anything you want to use needs to go through a review board. You submit a form, it goes to the lead architect, he reviews it, and if it's not approved at that level, it goes up to the CIO. How do you think about the balance between a healthcare organization maybe being stuck on whatever it had two years ago, which may not be cutting edge or as useful to people, versus people bringing whatever they want willy-nilly into the workplace and creating the risk you articulated above?

Shadow AI Risks And HIPAA

Lee DeHihns

Yeah, I think the first thing is to acknowledge that clinicians are going to find workarounds and tools to make their jobs easier to perform. I don't mean that in a pejorative sense at all; they're problem solvers by nature. If a tool isn't available from their employer, they're going to look for another solution to the problem. Once you acknowledge that, there's a way to empower that curiosity instead of running from it. A blanket ban on AI isn't going to work. But you can come up with a policy that tells people: if you are going to use AI, here are the three tools, or the one tool, we're using right now. Here's what's approved. Here's how we have enterprise settings set up so our data is kept private from the large language model being trained. Things like that can be really helpful, because they let people know: yes, this is out there; yes, you should be using it; we acknowledge you're going to use it, but here's a way to do it safely, so you don't feel like you have to operate in some sort of AI black market, or in the shadows, I guess.

Policies That Enable Safe AI Use

Brett Cooper

Yep. I'm going to pivot into our third topic area, which is patient advocacy in the age of AI. You and I have both talked about this. There's a balance here: your doctor was historically your advocate, and we've moved toward this industrial healthcare complex where the doctor is required to move very fast, get so many visits in a day, and doesn't always have time for patients. I can see a near future where some of what the doctor or nurse does right now gets replaced by AI or other technology. In that case, who becomes the advocate? On the flip side, a lot of people use AI as an advocate for themselves. I think we saw a speech where Rob Lowe talked about this: using AI to bounce ideas around because you don't have as much time with the doctor, given the insurance requirements now. Can you use AI to get a second opinion? Can you use AI to get more detail on what your doctor or nurse told you? You and I both fed our last sets of blood tests into our AI pretty quickly and asked whether the clinician's notes missed anything, which I feel like most people I talk to do now. I've also taken insurance forms, put them into AI, and asked it to decipher them for me, because they're indecipherable. Where do you think this settles, between needing advocates for patients in healthcare, which has historically meant doctors and nurses, versus patients having to advocate for themselves, where the answers coming out of some AI systems may not always be right?

Patient Advocacy With AI Tools

Lee DeHihns

Right. What's interesting is that the current insurance-driven model has been in place long enough that there's been a mind shift in patients themselves: they realize they have to advocate for themselves. I didn't think I'd bring up Rob Lowe in addition to you bringing him up, but one of the things he mentioned in that speech is that you owe it to yourself to be your own advocate, or to find someone within your family or personal care group to be your advocate. And ironically, part of who is going to be the advocate in a system where we're worried about AI being dangerous is actually AI itself, from the patient's point of view. What I mean by that could be what you just described with the blood work. But also, if you're going to see a doctor and you know what the appointment is for, let's say a kidney issue, you can use AI to list out questions you're curious about. For example: what are the ten questions I should be asking my doctor about diet changes because I'm in kidney failure? Those resources are much easier to use with AI than they were, say, ten years ago with something like WebMD, where everything you Googled made it seem like you were dying. So I think there is an opportunity to be your own best advocate with the help of AI.

Brett Cooper

Is there a correct way to describe this? The thing I always have concerns about is: do I trust this source? Even when somebody else is using AI, do I trust what it's saying? Is it hallucinating? How accurate is it? This goes back to the data comment you made earlier, but how do you balance needing something else to be an advocate for you against trust?

Lee DeHihns

Right, and all of it's a bit nuanced, but I'll use an example. If I were going to the doctor for a specific problem with my shoulder, I'd take notes beforehand and prep the questions I think I want to ask. I can put that into AI and see if I missed anything. Then, when you ask those questions of the doctor, there are ways to use the doctor to validate, as opposed to just relying on AI. Given that doctors have as many patients as they need to see in a given day, the more directed you can be in your interaction, the more helpful it's going to be. So you can use the doctor as your guardrails. You could also cross-check a couple of different AI engines if you have that capability. And here's a little anecdote about using your time with a doctor as wisely as you can. I was at a doctor's appointment yesterday, and there's an actual bell in the hallway with a rope hanging from it. If the doctor spends too long in one patient room, his medical assistant rings the bell, and he ends the appointment and goes to the next room. So your time is not being governed by you, or even necessarily by the doctor. Do as much prep as you can, and be realistic: if you read something that doesn't sound exactly right, it's probably a good idea to ask the doctor for their opinion.

Brett Cooper

Did you hear the bell ring when you were there?

Lee DeHihns

I was talking to the doctor and the bell rang, and he actually yelled out into the hallway, "I'll be right there." It was interesting; that's the first time I've seen it. But there are probably buzzers on people's legs we can't see, out there in every hospital.

Trust Problems And Validation Tactics

Brett Cooper

Yeah, trying to optimize it. The last and final segment is the role of the healthcare CIO in AI. I don't envy a lot of the leadership in hospitals, because there are a lot of pressures. You talked about the insurance pressure and the patient pressure, and you and I have talked previously about how hospitals generally don't make as much money as people think. What's the stat, something like 50% of hospitals running at a loss? And a lot of those stay open because they get money from nonprofits that help fund them, because they're a public service. Rural hospitals are definitely an issue too. I think we were talking last week about an article that came out that did a really good job covering the plight of the rural hospital, its funding, and how those hospitals are cornerstones of their communities. So when you think about being in the seat of a hospital CIO, how do you think about where to invest your money? If you have certain buckets of money and you start looking at AI, what are the things that are actually going to create a meaningful difference quickly?

Lee DeHihns

Yeah, we've touched on some of those already: the documentation side, the revenue cycle side, even staffing planning. You could look at historic flows of patient volume, what it took to staff them and what that cost, and then model out the next six or twelve months based on historical data. That could be really helpful to these people, particularly when they're trying to do more with less. I think running away from AI is a mistake, and I don't think hospitals are necessarily doing that. But the fact of the matter is, anyone in a CIO role in a hospital right now has not spent their career as an AI expert, because it's just too new. So understand where you can pick and choose, actually test a before and after, and make sure it's benefiting you in a way that helps your bottom line. That's the way to start addressing the problem. And start small, just like anything else: start with a small test case, prove it, then work on a strategy to roll it out. If you try to do everything at once, it's going to be a mess.

Brett Cooper

On the flip side of that question, are there things you've seen, heard about, or observed where healthcare did AI wrong? Cases where they rolled out the wrong things? What mistakes have you seen that people should look to avoid?

Lee DeHihns

I think it's very similar to other deployments, particularly when it comes to technology: you don't buy a technology and then look for problems to solve with it. That would be like buying a screwdriver and walking around your house looking for everything the screwdriver fits. That's just not how you do it. So look at it from this perspective: identify a use case, make sure you're testing it, don't fall into the trap of premature deployment or over-reliance on something unproven, and don't just trust that it's working without going back and validating, whether that's a success metric based on revenue cycles or on staffing efficiencies. Also ask whether you're servicing all of your patients in an equitable manner based on what you're doing; don't make it a premium service. That's how I would think about it. You don't have to start all in. Be deliberate about it, and then focus.

What Hospital CIOs Should Pilot First

Brett Cooper

Similar to everything else: pick a pilot or a POC, go after it, solve a problem, then solve the next problem. Don't try to boil the ocean. One of the other things, from the role of a CIO, or even a CEO or CFO: we've seen it in a lot of other industries. Like I said, this is Q2 of 2026, and in Q1 of '26 we saw a lot of companies do AI headcount reductions. Amazon laid off thousands of middle managers and said those roles can be replaced. I imagine in healthcare there's a huge amount of overhead, especially in dealing with insurance companies and all the coding and billing. Do you see the amount of management and IT overhead, or even middle-management overhead, in hospitals changing with the introduction of AI over the next few years?

Lee DeHihns

That's definitely a big area for efficiency gains. I saw today that Oracle laid off 30,000 people, and I'm sure it's similar to some of the strategies Meta or Amazon have put in place around replacing some of that middle-management work. People don't think about the amount of overhead that goes into billing and insurance management in a hospital system. In addition to whatever the hospital has, the insurance company has its side with its own billing processes, and then there are even middle consulting companies that audit hospital systems to make sure they're getting reimbursed for everything they should be from whatever program they're submitting the claim through, private or public. So having the ability to take all those billed codes and understand what you're entitled to as a healthcare system, I think AI could definitely bring some efficiencies there, in addition to catching charting mistakes and things like that. I don't think that necessarily removes staff, but those two areas seem to make a lot of sense to me.

Brett Cooper

Yeah, definitely. All right, just to wrap up and summarize what we talked about. AI is here, and there are a lot of features and capabilities. We started out talking about augmentation: augmenting things like notes, transcription, and coding, the overhead work that doctors and nurses don't want to do because it takes away from patient time, and using AI to take that on. On shadow IT and shadow AI: make sure you have processes to bring in new AI technologies, keep them out of the dark, and have approvals and guardrails in place. On patient advocacy: it's going to continue to happen. Patients are going to use AI to try to improve their care, and hospitals and healthcare organizations shouldn't push that away. And the last one, from the CIO seat: change management is what I took away from you. Do POCs, keep trying new things, start small, don't try to boil the ocean. Another thing is to go back and listen to your doctors and nurses. What do they want? What do they want to use? Getting their feedback, rather than rolling things out without their input, is really important. With all of that, we're going to continue to see more AI in healthcare. I'm excited about it, and a little scared at times, but I feel like it's getting the attention it needs from security organizations and the risk and compliance side as well. Anything else to add, Lee, that I missed?

Avoiding Premature AI Rollouts

Lee DeHihns

No, I think you summed it up well. It's not going away. And there are enough groups paying attention to it, on both the provider side and the patient-advocacy side, that I'm optimistic there could be some real advantages at a large scale, from an augmentation perspective, with AI rollout.

Brett Cooper

Excellent. Well, thank you for joining me today, and thanks, everybody, for listening along. If you have questions, feel free to visit us at bluefletch.com and reach out through our contact page, or hit us up on LinkedIn. As always, we love to chat with folks, and we look forward to this episode being redone in 2027 by AI Lee and AI Brett with their thoughts.

Lee DeHihns

But they'll be much better looking, too.

Brett Cooper

They probably will be. Looksmaxxing, you know. The AI will be doing that.

Lee DeHihns

Yeah, exactly.

Brett Cooper

Awesome. Thank you, Lee, and have a good one.

Lee DeHihns

Thanks.

Brett Cooper

You too. Thank you for tuning in to the Frontline Mobility Edge. If you enjoyed this episode, make sure to subscribe for more content every month. If you'd like to learn more about BlueFletch, check out the link in the description or visit us at bluefletch.com. See you next time.