
AI in the Healthcare Industry: Balancing Risk and Reward

Listen to our recent webinar that discusses where we are with AI, concerns patients have, and where AI in healthcare is heading.

Are the rewards of using artificial intelligence (AI) in the healthcare industry worth the inherent risks?

Whether you find yourself on one side of this issue or on the fence, we encourage you to listen to our recent webinar, which discussed where we are with AI, the concerns patients have, and where AI in healthcare is heading. Read the takeaways in this blog or watch the full webinar on demand below.


What Is the Current State of AI in Healthcare?

Pew Research Center polling shows that 60% of Americans would be uncomfortable if their healthcare provider relied on AI for their diagnosis and treatment. However, there are some misconceptions about what AI is in the medical space. Let's look at what it is and what it isn't.

One misconception is that the AI used in healthcare is generative AI, or that it operates on people with scary-looking robotic arms. In reality, robotic surgical arms are more of a high-tech scalpel directed by skilled surgeons, and generative AI is not being used for diagnosis because it can suffer from "hallucinations." Instead, machine learning is currently being used for paperwork and for tedious, error-prone work such as adverse event prediction, operating room scheduling, and prior authorization.

Research shows that after leaving the doctor's office, patients tend to forget about 40%–80% of what their physician said. AI recordings enable patients to listen to playbacks of their visit through their electronic chart. AI can also transcribe physician notes, allowing the physician to review them, add comments, and make sure everything is accurate. At a time of provider shortages, this is especially useful: AI can save healthcare providers about two hours of paperwork each day.

AI can also be useful for identifying patterns. For instance, the Food and Drug Administration (FDA) recently approved an AI algorithm that quickly scans large volumes of medical images for cell clusters or subtle patterns. This reduces radiologists' reading time and cognitive load by quickly identifying regions of interest or triaging images with abnormal findings that could indicate cancer.

In general, medical professionals are being conservative and methodical about how they use AI, especially generative AI, in clinical settings due to regulatory uncertainty, patient safety concerns, and litigation risk.

What Is the Future of AI in Healthcare?

We're on the cutting edge of technology, and when people think about the potential of generative AI, many picture something like "The Terminator." Hollywood often portrays AI as the end of the world.

In reality, we're still in AI's infancy, and the medical community will continue to play it safe, using AI not to its full potential but as a tool that helps physicians save time and process data quickly. Doctors likely won't order a plethora of expensive tests just because AI told them to.

Consider artificial intelligence in cars: even with autonomous and semi-autonomous vehicles, the driver and designer of the car are still responsible for what the car does. We live in a very litigious society, where medical malpractice claims drive insurance premiums. As AI becomes more prevalent, we will still need guardrails that keep responsibility with physicians and surgeons rather than with the AI, because they are ultimately responsible for the care of their patients.

Privacy and Risk Management Strategies for AI

When using AI, medical professionals feed large data sets and intellectual property into the system, data specific to the hospital or the medical records company. Patient privacy and providers' obligations under HIPAA are, at least for now, the largest exposures in deploying AI, so it's essential to have the proper protocols and protections in place. When implementing AI, it's important to ask: What information does it use to run its algorithms? Who designs the algorithm? And what does the landscape of the data look like?

AI improves the speed and accuracy of patient charting and documentation. As those in the litigation world say, "If it's not charted, it didn't happen." Now patients and healthcare providers can review doctor-patient transcripts and click on any areas they have questions about, which may help in the defense of healthcare providers in our litigious landscape.

To keep patients safe and to minimize employers’ potential exposure to AI-related claims, here are a few risk management strategies you can take:

  • Ensure the vendor providing the AI platform can secure the data under HIPAA's privacy and security rules.
  • Secure and segment the model so you're not committing unintentional HIPAA violations or privacy breaches by feeding in an individual patient's healthcare information (see the sketch after this list for one illustrative safeguard).
  • Use ChatGPT wisely. It’s not a private platform. The information you enter informs and educates that model, and there's an argument to be made that this is a violation of the data subject's privacy.
  • Plan for the availability of artificial intelligence, just like you would for any other electronic healthcare records system. For example, do you have a backup plan to function without internet connectivity or power?
  • Educate your workforce on what they can and can't do with the AI tool, reminding them that humans are still responsible for the outcome of the care.
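
One concrete way to act on the "secure and segment" point above is to strip obvious identifiers from any text before it leaves your environment, for example before sending a note to a public AI service. The short Python sketch below is a hypothetical illustration only: the patterns and sample note are invented, and a few regular expressions are nowhere near full HIPAA de-identification. A real deployment would use a vetted de-identification tool and legal review.

    import re

    # Hypothetical illustration only. Real de-identification requires a vetted
    # tool and an expert determination under HIPAA, not a handful of regexes.
    PHI_PATTERNS = {
        "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
        "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    }

    def redact_phi(text: str) -> str:
        """Replace obvious identifiers with placeholder tags before the
        text is shared outside your secured, segmented environment."""
        for label, pattern in PHI_PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    note = "Pt MRN: 00482913 seen 3/14/24, callback 555-867-5309 re: labs."
    print(redact_phi(note))
    # Prints: Pt [MRN] seen [DATE], callback [PHONE] re: labs.

The design point is simply that redaction happens on your side of the boundary, so whatever AI platform you use never sees the raw identifiers.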

For more on AI and its future in the healthcare industry, watch the entire webinar.
