| Arguments | Rebuttals |
| --- | --- |
| OPPOSED to the necessity of explainability of cAI | |
| “Explanations from current XAI methods superficially represent the computational complexity that underlies a prediction” [28]. “Extracting information from models which may have millions of parameters and presenting this information in a way understandable to the human mind is an inherently reductive process” [8] | “It can be argued by analogy that if idealized scientific models such as the ideal gas law can provide genuine explanations that enable people to better understand complex natural phenomena, then XAI methods can provide genuine explanations too.” [23] |
| “An explanation that assumes a background in computer science, for instance, may be useful for the manufacturers and auditors of medical AI systems, but is likely to deliver next to no insight for a medical professional that lacks this technical background. Conversely, a simple explanation tailored to patients, who typically lack both medical and computer science backgrounds, is likely to provide little utility to a medical practitioner. […] post hoc explanation methods are not currently capable of meeting this challenge” [27] | “An explanation does not require knowing the flow of bits through an artificial intelligence system, no more than an explanation from humans requires knowing the flow of signals through human brain neurons” [9] |