Table 8 Discourse on AI explainability and clinician and/or patient trust

From: On the practical, ethical, and legal necessity of clinical Artificial Intelligence explainability: an examination of key arguments

Arguments and rebuttals

IN FAVOUR of the necessity of explainability of cAI

Argument:

“how can we trust our health, let alone our very lives, to decisions whose pathways are unknown and impenetrable? Indeed, without established trust, a patient may have little or no incentive to seek the advice of a physician or share sensitive clinical information, which is required by the artificial intelligence algorithms for diagnostic purposes” [9]

Rebuttals:

“explainability is an instrumental means of establishing and maintaining trust and control, but is not a critical end in and of itself” [8]

“a mechanistic understanding of how an intervention works is not necessary for either trust or transparency” (Bradshaw T.J. et al., 2023)

OPPOSED to the necessity of explainability of cAI

Arguments:

“clinicians need transparency around the technology they use to ensure certain levels of trust. However, clinicians do not necessarily need an in-depth explanation of how each AI recommendation or outcome is generated, if they are comfortably satisfied that the technology is accurate and reliable, they being the most important factors in ensuring trustworthiness.” [17]

“patients trust technology if their doctors recommend it. The concepts of trust and delegation are inherent to this market.” [30]

Rebuttals:

“Unfortunately, trust is not something that is so easily transferred. We can easily imagine a patient who trusts the professional in most circumstances but fails to trust them whenever they outsource part of the decision-making process to an AI system.” [48]

“patients rely on the clinician’s ability to understand and convey […] explanations in a way that is accurate and understandable” [31]