| Arguments | Rebuttals |
| --- | --- |
| IN FAVOUR of the necessity of explainability of cAI | |
| “[Explainability is] necessitated by the principle of justice, which requires patients to be allowed to understand and appeal against healthcare outcomes on a fair and equal basis” [43] “patients may claim that they are being discriminated against when they are not given similar opportunities to clear their doubts compared to others.” [37] | “Patients have traditionally had a right to know what technology will be used in diagnostic processes, what are the benefits and risks, as well as financial implications of technology, which could be defined as ‘transparency’ around technology, but not how exactly technology functions (explainability). The same standards should apply with relation to AI technologies.” [30] |
| OPPOSED to the necessity of explainability of cAI | |
| “public reason standards required for health justice never necessitated full transparency in how medical tools work. They required good reasons for decisions and opportunities to challenge them, which can be and are often provided without tools being explainable. Legal mechanisms for evaluating AI tools present numerous opportunities to assess performance, costs, and reasons for adoption and a framework for assessing accuracy and justifiability” [32] | “whereas accuracy is mainly relevant from outcome-oriented stances, explainability is a requirement for procedural fairness accounts […] One area where the inexplicability of AI is of particular concern: the allocation of scarce medical resources […] Accountability for reasonableness—which remarks that fair processes need transparency, publicity on rationales, and open mechanisms to revise the decisions—can be applied to XAI and distributive justice in medicine” [44] |
| IN FAVOUR of the necessity of explainability of cAI | |
| “The medical records of some of the most vulnerable groups, especially from technologically underdeveloped territories, might be poorly collected or digitized, thus resulting in sample size disparity. Therefore, available raw data may reflect and expand existing bias and, in turn, unfairly affect members of protected groups based on sensitive categories like gender, race, age, sexual orientation, ability, or belief” [9] “The problem of bias, nevertheless, is not solved by simply trying to assess algorithmic performance across diverse demographics. Technology-centred solutions are limited when they neglect that biases are also a sociopolitical issue related to underlying health inequities in society. Biases can surreptitiously lead to favouring or disadvantaging particular social groups in contexts of historical discrimination, which can lead AI to reproduce societal prejudices and systemic inequalities, or even reinforce discriminatory practices. An opaque or unexplainable procedure prevents the verification of whether the decision is free from inappropriate considerations and unethical biases” [44] | |