
Table 9 Discourse on AI explainability, due diligence, and legal liability

From: On the practical, ethical, and legal necessity of clinical Artificial Intelligence explainability: an examination of key arguments

Arguments IN FAVOUR of the necessity of explainability of cAI:

“Opacity can be morally problematic in cases where a clinician violates due diligence, making treatment decisions based on an ML model’s prediction, while being in the dark about the underlying factors” [21]

“by referring to their professional duties, doctors have an argument to insist that technology producers develop explicable AI-systems to adequately fulfill their responsibility of avoiding harm.” [37]

“Given that clinical reasoning involves a range of tasks that cannot be deferred to AI systems, but must be undertaken by clinicians in collaboration with patients, it seems important to recognize clinicians as epistemologically responsible.

[…] given the epistemic responsibility of the clinicians curating the available information and best evidence about the patient, it only seems appropriate that they have the means to take on this responsibility. Being epistemically responsible, they must also be able to justify their reasoning and judgments. Requiring explanations of opaque AI output seems to support this.” [23]

Rebuttals:

“In daily life a sufficient explanation to a physician is an explanation that gives her enough justification to do or not do something

[…] accuracy should and does serve as a necessary and sufficient basis for responsible use of AI in [clinical decision support systems] by physicians.” [51]

Arguments IN FAVOUR of the necessity of explainability of cAI:

“Physicians as domain experts should have access to the explanation why an algorithm reached a certain decision because the algorithm’s output justifies or contests their own decision” [52]

“a physician consulting a black box AI can find herself in a rationally irresolvable situation if the AI output contradicts her diagnosis.” [53]

“In the event of technology–physician disagreement, clinicians should be able to defend themselves” [54]

Rebuttals:

“if black box algorithms diagnose an illness and predicts which type of treatment would be most effective, the question what an acceptable and desirable way of acting is needs to be deliberated further based on this information, for which professional expertise and patient values are important” [41]

Arguments IN FAVOUR of the necessity of explainability of cAI:

“confronted with a black-box system, clinical decision support might not enhance the capabilities of physicians, but rather limit them. Here, physicians might be forced into “defensive medicine” where they dogmatically follow the output of the machine to avoid being questioned or held accountable” [31]

“How can we then call for ultimate human responsibility, when we at the same time deprive a human operator from the epistemic means to live up to this responsibility?” [53]

Rebuttals:

“doctors are compelled, under negligence law, to exercise independent judgment and may disagree with the model

[…] not departing a wrong model prediction would breach the standard of care if, and only if, the reasons for departure were sufficiently obvious to a professional” [55]

Arguments IN FAVOUR of the necessity of explainability of cAI:

“to provide patients with the most appropriate options to promote their health and wellbeing, physicians need to be able to use the full capabilities of the system. This implies that physicians have knowledge of the system beyond a robotic application in a certain clinical use case, allowing them to reflect on the system’s output.” [31]

Rebuttals:

“the use of the model should always only be part of a more comprehensive assessment, which includes and draws on medical experience” [55]

Arguments IN FAVOUR of the necessity of explainability of cAI:

“If a clinician is subsequently sued because he or she accepted or rejected the model’s decision and the patient experienced resulting harm, a black box algorithm would compromise if not preclude the physician’s ability to defend him- or herself in court. […]

If physician defendants allege that the model’s (unexplained) output decision seemed reasonable and therefore it was reasonable to follow it, they would not be able to counter the plaintiff’s rejoinder: that the patient’s injury is proof positive that the model’s output was not reasonable and that the physician defendant was negligent in failing to reject it. […]

Physicians would therefore be exposed to significant liability for decisions that the technology precludes them from making but whose reasonability they cannot interrogate or, more importantly, justify.” [54]

Rebuttals:

“This claim is contestable, not only because physicians typically operate other technologies and machinery which they do not fully understand or cannot fully explain the inner working of (think of MRI scans, eg), yet they are sufficiently in control and understand enough of the workings to be considered responsible for operating these machines, including mistakes caused by these machines.

[…] for medical AI physicians can be responsible, in terms of accountability, for using these devices without fully knowing or understanding their inner workings

[…] responsibility can be ascribed to physicians when, under conditions of reliability, they were not morally justified in their actions.” [41]

Arguments IN FAVOUR of the necessity of explainability of cAI:

“one might compare two scenarios in which an adverse patient event has occurred […] as a result of faulty ML ‘reasoning’ despite empirical validation […], with the only difference between the scenarios being the level of interpretability. […]

One could then ask the question of whether the […] level of interpretability impacts the degree to which the attending physician is accountable for the adverse patient event. […]

If this is the case, with the degree of interpretability constituting the only difference between these two scenarios, one must surely conclude that interpretability of ML models is relevant to accountability.” [29]

“In case AI use results in harm and the court proceedings are started, the courts will need to understand how technology functions, how and why a particular outcome was generated, whether the technology is defective, and who should be held liable for the harm caused. Technical AI Explainability will be arguably important in determining and allocating liability.” [30]

Rebuttals:

“Instead of relying on technical explanations generated by XAI, court experts are likely to need access to various parts of the module, such as algorithmic parameters, training information, validation information and outcomes, clinical testing information, regulatory approval details etc. Experts might need to conduct an independent validation/audit of the system in order to determine whether it has a specific defect that caused harm and who is responsible for the defect. Thus, instead of technical explainability, they will require transparency around AI module

[…] the court experts will be invited to examine whether the AI development process met industry standards and legal regulations, and whether the AI manufacturer took all reasonable steps to avoid any harm and eliminate any possible errors/defects from software” [30]

“On the issue of explainability, when determining whether there is a breach of duty by the clinician, it may not be directly determinative whether the clinician knew precisely how a particular AI device functioned and how it arrived at its decision. As indicated previously, clinicians often work with complex technology that they do not understand, whether fully, partly or at all, and rely upon their outputs. […]

The focus is not on the clinician’s knowledge of the technology, but on their activities—whether they acted reasonably and with sufficient skill and care to prevent any possible harm.” [17]