Paging Dr. Robot – a Q&A with Dr. Marvin Slepian on AI in health care

Image: In a study led by University of Arizona Health Sciences researchers, about half of the participants said they would rather have a human – not artificial intelligence – oversee their diagnosis and treatment.

Image: Dr. Marvin Slepian, Regents Professor of medicine, College of Medicine – Tucson

For many people of a certain generation, the first time they became concerned about how artificial intelligence might interfere with their lives was in 1968. That's when HAL 9000 told Dave Bowman, "I'm sorry, Dave. I'm afraid I can't do that," in response to a command to open the pod bay doors in the science fiction classic "2001: A Space Odyssey."

Fast forward 55 years and artificial intelligence is playing an increasing role in almost every industry. In the medical field, AI-powered treatment options are on the rise. But a study led by University of Arizona Health Sciences researchers found that, when it comes to diagnosis and treatment, about half of the participants said they preferred a human doctor over AI.

The research team, led by Dr. Marvin Slepian, Regents Professor of medicine at the College of Medicine – Tucson, found that most patients aren't convinced the diagnoses provided by AI are as trustworthy as those delivered by human medical professionals.

In this Q&A, Slepian, a cardiologist and member of the Sarver Heart Center and BIO5 Institute, talks about how AI is being used in the medical field now, how it may be used in the future, and the legal and ethical ramifications of the emerging technology.

How is AI already being used in an average visit with a health care professional?

There is a spectrum of use by different individuals. Many physicians use chat platforms as enhancements to their care. For example, I know some physicians who take a patient's history data and put it into a chat program to get it back in a more organized format. Some also use it to put together data, references and other information about a particular disease or condition to give to a patient.
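To make that concrete, here is a minimal sketch of how a free-text history note might be reorganized with a chat model. The `ask_llm` helper, the prompt and the sample note are hypothetical placeholders for whichever chat platform a physician actually uses; this is an illustration, not a clinical tool, and real patient data should never be sent to such a service without proper safeguards.

```python
# Hypothetical sketch: reformatting a free-text history note into
# organized sections with a chat model. ask_llm is a stand-in for
# whatever chat API a practice uses; it is stubbed out here.

PROMPT_TEMPLATE = (
    "Reorganize the following patient history into sections: "
    "Chief Complaint, History of Present Illness, Medications, "
    "Allergies. Do not add or infer any facts.\n\n{note}"
)

def ask_llm(prompt: str) -> str:
    """Placeholder for a real chat-model call."""
    return "[model response would appear here]"

note = "58 y/o with 2 weeks of exertional dyspnea, on lisinopril, NKDA."
print(ask_llm(PROMPT_TEMPLATE.format(note=note)))
```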

Some medical practices use AI platforms to organize data about best practices and treatment of their patient base in general. They can see how many patients are on a certain diabetes medication or how many are taking a certain anti-hypertensive (blood pressure) medication and get information and insights about the results. So, they're using it for their own research and analytics.
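As a rough illustration of that kind of practice-level analytics, the sketch below counts patients on a given medication and aggregates a simple outcome by drug. The data frame, column names and values are synthetic stand-ins, not drawn from any real patient system.

```python
# Minimal sketch of practice-level analytics (synthetic data only).
# Column names and medication labels are illustrative assumptions.
import pandas as pd

# Toy stand-in for a de-identified patient roster.
patients = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5],
    "medication": ["metformin", "lisinopril", "metformin",
                   "amlodipine", "lisinopril"],
    "a1c_change": [-0.8, None, -1.1, None, None],  # diabetes marker
})

# How many patients are on a given diabetes medication?
on_metformin = (patients["medication"] == "metformin").sum()
print(f"Patients on metformin: {on_metformin}")

# Aggregate a simple outcome by medication to spot trends.
print(patients.groupby("medication")["a1c_change"].mean())
```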

Artificial intelligence-based analytic tools are also used in laboratory and imaging areas like radiology. For example, a radiologist who has been looking at X-rays all day may miss a little aspect of a certain shadow. Having an AI analytic tool to complement what the radiologist is seeing provides them with an independent read.

As AI technology continues to evolve, how might it be used in the future in a health care setting?

I think more advanced platforms will allow us to develop more endpoints as we gain information on a patient. I think of it in a multi-scale way. What I mean is, a physician talks one-on-one with a patient. Moving "up," that physician can collect information about that patient's family, demographic group and community. Moving "down" it will be possible to get information from the patient's organs or cells or molecules. AI platforms will increasingly allow us to make multi-scale connections using information from all of those levels.

Wearable technology will also continue to get better. Devices will have a wider range of accelerometers and gyroscopes. Instead of measuring basic movement, they will measure total movement – pitch, yaw and roll, like an airplane. So, in addition to a patient telling me that they are fatigued and short of breath, the wearable is giving me these digital biomarkers, allowing me to dig deeper. As we increase our access to digital and chemical biomarkers, we get closer to true precision medicine, basing our treatment on each individual.
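As a rough sketch of the orientation signals described above, the snippet below estimates pitch and roll from static accelerometer readings using the standard gravity-vector formulas; yaw cannot be recovered from gravity alone and would require gyroscope or magnetometer fusion. The sample values are made up, and this is illustrative rather than a production wearable algorithm.

```python
# Illustrative only: estimate pitch and roll from a wearable's
# accelerometer while the device is roughly static. Yaw needs
# gyroscope/magnetometer fusion and cannot come from gravity alone.
import math

def pitch_roll_deg(ax: float, ay: float, az: float) -> tuple[float, float]:
    """Return (pitch, roll) in degrees from accelerometer axes in g."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Made-up sample: device tilted slightly forward.
print(pitch_roll_deg(0.17, 0.02, 0.98))
```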

Now, will we have "Dr. Robot" as the future for how we take care of patients? Probably not, although that kind of technology could help doctors take care of patients in a super-remote area, a hazardous facility or even the moon.

What can you say to reassure people you surveyed who were less likely to want AI involved in their health care?

That's a very important question. There will always be individuals who are averse to this technology or apprehensive about it. That puts the onus on us as providers to understand their concerns. Perhaps with some education, some of that can be overcome. But we need to accept that for many, it will not be overcome.

I think the most important thing is to reassure individuals that they ultimately have the choice about their treatment – the patient is ultimately the final decider about their care. For example, we may tell a patient, "It appears that you have this unusual variant of a condition. You fit within this general group. I know that from years of experience, through the value of AI and additional testing. I have looked at treatment options that AI has come up with and I agree, but you have the choice as far as what to do."

I think if you have that type of relationship, then AI is a tool. It's not running the show. Everybody is empowered by that.

You also have a law degree. What role does the legal world play in all of this?

There is concern with AI in the domain of misinformation and disinformation, since these platforms rely on data pulled from the web. Is a platform pulling information that is inadvertently or intentionally misleading? The data it's pulling could run the spectrum from factual information to belief and opinion to outright fabrication. I am currently part of a research project running an experiment to see whether we can feed certain information to an AI system like ChatGPT and persuade it to repeat that information as truthful. If you keep repeating something that is false and get a system like that to believe it and put it out, then the innocent bystander who picks up the information has no way of knowing how accurate it is. Our goal is to define the parameters of exactness and accuracy for generative AI like ChatGPT – to improve its utility while also developing suggestions for guardrails that balance freedom of information against mis- and disinformation.

Have you given any thought to the role AI can play in mental health care?

Absolutely. One of the big issues in mental health is that patients don't have enough access to mental health care. We don't have enough trained mental health professionals. For milder situations, I think systems like chatbots could be a means of providing some companionship or a simple way to talk something out. However, safeguards would need to be built into a system like that to ensure that something like a suicide risk doesn't fall through the cracks.

Another use would be to have a mental health professional conducting a session with someone while an AI platform in the background detects key phrases, demeanor and behavior to augment the analysis of the therapist.
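A toy sketch of that key-phrase detection idea, tied to the safeguard mentioned earlier: flagged phrases would be escalated to a human clinician rather than acted on by the system itself. The phrase list and transcript here are hypothetical; a real clinical tool would require validated models and careful safety review.

```python
# Toy illustration only: flag risk phrases in a session transcript so
# a human clinician can review them. The phrase list is hypothetical;
# a real clinical tool would need validated models and safety review.
RISK_PHRASES = ["hopeless", "no way out", "better off without me"]

def flag_phrases(transcript: str) -> list[str]:
    """Return risk phrases found in the transcript (case-insensitive)."""
    text = transcript.lower()
    return [phrase for phrase in RISK_PHRASES if phrase in text]

session = "Lately I feel hopeless, like there is no way out."
flags = flag_phrases(session)
if flags:
    print(f"Escalate to clinician; flagged phrases: {flags}")
```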

What role will AI play as the University trains students to work in a medical field that is going to contain this rapidly evolving technology?

AI is here to stay, so all students moving forward will need to become informed about what it is and what it does. They also need to understand that these tools are constantly being validated and improved, so there always has to be a certain element of skepticism, and an understanding that AI augments care by human professionals. It doesn't replace it.

Many people in this field bemoaned early on that this would be the "beginning of the end" of education. But they have come to realize that this is just the beginning, and that we have to embrace AI as a tool to empower education and scholarship. It creates new opportunities for students and for all of us in the academic world. These platforms can stimulate your thinking or help you see things that perhaps you hadn't thought of.

I think learning to incorporate AI into your practice will be very useful to medical professionals. So, medical students and faculty need to get on board with this and incorporate it into their research, teaching and eventually their practice, regardless of their field.
