Ethics for AI in medicine | A social perspective

Artificial Intelligence (AI) and Machine Learning have taken center stage in public debate and in the current development of society. Whether it’s self-driving Teslas or ingestible robots, society has turned its attention to these technologies. AI in medicine, in particular, has been a hotly debated topic for quite some time now.

Prominent scientists all over the world have called for serious attention to robot ethics in various fields, given AI’s rising prominence in the present decade. The concern is not that “robots” will take over human activities, but rather how “robots” that are programmed to think for themselves should behave. Not only the tech world but also those outside Silicon Valley have shown significant interest in developing AI in an ethical, socially responsible way.

One of the major fields in which AI has already begun to show its true impact is the health sector. From medical diagnosis to assistive robots, AI has had substantial consequences in this area. To put it in Nicholson Price’s words in his excellent piece Black Box Medicine, “Medicine already does and increasingly will use the combination of large-scale, high-quality data sets with sophisticated predictive algorithms to identify and use implicit, complex connections between multiple patient characteristics.”

In his article, Price points out the significant changes bound to occur when Big Data meets Big Health, a convergence hailed as a major leap forward for the health sector, though it is not without its troubles.

It raises tremendous challenges and questions for our current policy landscape. Although Big Data offers immense immediate and long-term benefits, development and implementation have proved to be major hurdles so far. Significant questions have been raised regarding privacy, regulation, and commercialization. Current innovation policy lacks the incentives needed for proper implementation, and it needs a major overhaul, with new ethics and rules to govern how AI is integrated into medicine.

Problems faced while introducing AI in medicine

The introduction of AI into medical diagnosis and decision-making may accelerate the entire process and reduce misdiagnoses and other medical errors, but its implementation is not without trouble. As we shift pieces of the decision-making process to an algorithm, we inevitably rely more on Artificial Intelligence and Machine Learning.

This complicates matters when we want to assess malpractice claims in which the improper treatment is the result of an error in the algorithm. Simply put, the medical malpractice system in the US holds medical professionals liable when the care provided to patients deviates from the accepted standard so far as to constitute reckless or negligent practice.

Note that the existing system treats the practitioner as the trusted expert: the major part of a diagnosis depends on the individual judgment of the treating physician, who is held responsible if the care provided is not up to the mark.

The question then arises: who should be held liable if the error committed by the doctor was the result of an algorithmic glitch in the AI diagnostic tool?

And considering the vast amount of data involved, and the rapid rise in the efficiency and accuracy of AI diagnostic tools relative to physicians in the coming years, would it be correct to blame the doctor for a misdiagnosis? If deferring to the AI’s recommendation is the sensible strategy, it would be wrong to hold the doctor responsible for a mistake the algorithm contained. Traditional malpractice notions of physician negligence and recklessness may become harder to apply.

We know that medical malpractice laws exist to compensate patients and protect them from further harm, but now that algorithms play a significant role in decision-making, traditional laws fail to cover these developments. This requires a major upheaval in ethics as well, and it may in fact prove a blessing in disguise.

But how?

According to a study conducted by researchers at Northwestern University, strict malpractice liability laws don’t necessarily correlate with better outcomes for patients. The study also suggested that strict medical malpractice laws and heightened liability don’t significantly influence treatment in ways that keep patients safer.

On the other hand, using AI and machine learning to shift malpractice liability away from individual medical practitioners could, in fact, help health care by mitigating the as-yet-intractable problem of overspending on care.

Health companies like HealthTap have already announced Dr. AI, an AI-powered, consumer-facing diagnostic app that patients can download onto their phones.

Hopefully, in the not-so-distant future, AI will pave a revolutionary path through clinical decision-making, reasoning under uncertainty, and knowledge representation, all the way to systems integration, translational bioinformatics, and cognitive issues.

© 2018 Dr. Hempel Digital Health Network
