Interview with Michael Abramoff of IDx

Dr. Michael D. Abramoff, MD, PhD, is an ophthalmologist, computer scientist, and entrepreneur. He is Founder and CEO of IDx, the first company ever to receive FDA clearance for an autonomous AI system. As an expert on AI in healthcare, he has been invited to brief the US Congress, the White House, and the Federal Trade Commission. He continues to treat patients with retinal disease and trains medical students, residents, and fellows, as well as engineering graduate students, at the University of Iowa.

Q: Artificial intelligence (AI) techniques have made waves across healthcare, even fueling an active debate over whether AI doctors will eventually replace human physicians. Do you believe that human physicians will be replaced by machines in the foreseeable future? What are your thoughts?

A: There will always be a need for human physicians, if only for the often-raised need for human interaction. Automation is best suited for narrow, well-defined diagnostic, therapeutic, and surgical tasks. The better defined the task at hand is in terms of scientific evidence, pathophysiological understanding, and cost-effectiveness, the easier it is to manage with automation. For example, say you have one of these rare diseases that affects only 20 people in the world – it is hard to foresee a case for automating the diagnosis or management of that disease, unless there is a great amount of overlap with similar diseases. Many such rare diseases are best diagnosed by a human, who has more flexibility than a machine to change the diagnostic and therapeutic process based on new evidence.

Q: Can you provide some use cases that have already successfully demonstrated the value of AI/Machine Learning in healthcare?

A: In order for healthcare to truly unlock the value of AI and machine learning, we need to pursue autonomous use cases. We’ve seen many assistive use cases proposed, since they offer an easier path toward implementation, but I am not fully convinced that assistive applications can have a tangible impact on healthcare. It is hard to prove their overall performance in the healthcare system given the high variability of the individual physicians being assisted. Autonomous AI is really the only way to truly move the needle on quality, affordability, and healthcare productivity. To date, IDx is the only company that has successfully implemented autonomous AI in a real-world clinical setting.

Q: What areas in healthcare will benefit the most from AI/Machine Learning applications, and when will that be?

A: We’ve seen a lot of AI being used to manage healthcare data and analyze patient records, which is great and needed, but the most exciting potential for AI lies in diagnostic and therapeutic medicine. We are likely to see more applications developed for prevalent diseases where we understand the disease well and where there are well-defined, systematic approaches to treatment, like diabetic retinopathy. That time is now; it is just a matter of applying the best practices we used to develop IDx-DR to other disease states.

Q: What are some of the challenges to realizing AI/Machine Learning in healthcare?

A: The business model, the use case, and safe and ethical implementation. The technology is already there, but how do you translate it into practice? Not all use cases make sense from a clinical, cost-effectiveness, or business-model standpoint. And how do you implement it safely?

Q: How close are we to successfully using AI to mine big data?

A: The question that needs to be answered first is what problem you are trying to solve. Are we doing science and trying to find ideas for new hypotheses? Don’t forget that the replication crisis we are seeing right now just about everywhere in science is due in great part to the blurring of the line between hypothesis generation – for which mining is appropriate – and hypothesis testing – for which mining is absolutely inappropriate and has had harmful effects. Or are we trying to build AI systems that need to be evaluated for their effectiveness?

Q: What is your outlook or vision for use of AI/Machine Learning in healthcare?

A: AI has the potential to transform the quality, accessibility, and affordability of healthcare. I imagine a time when people can take care of much of their standard, routine healthcare by walking around the corner to their nearest retail clinic, where a large part of the diagnostic and therapeutic process is performed by AI and the human interaction remains intact.

Q: If AI is not quite there yet, what is needed to get us there?

A: It is here now; there is currently an FDA-cleared autonomous AI in use (ours!). The opportunity is great, but the AI community needs to proceed carefully to avoid the pushback that we are seeing on autonomous vehicles. There are legal and ethical considerations, and while we need to avoid needless barriers in the way of progress, we absolutely need to put the patient’s safety first before implementing new AI.

Q: Is there anything you would like to share with the PMWC audience?

A: Transparency is key for those working in medical AI in order for patients and physicians to trust this technology. Being transparent about how the algorithms work, how they were developed, how they were evaluated, and how the training data was obtained is essential. Explainability – the ability to understand how the algorithm made its decision – is part of that.