Created at 12pm, Mar 4
Ms-RAG, Artificial Intelligence
Axe the X in XAI: A Plea for Understandable AI
ppzYK1O_r12D7fWKHUKktxZifMHNOR8-NrDz9Net29U
File Type: PDF
Entry Count: 92
Embed. Model: jina_embeddings_v2_base_en
Index Type: hnsw

Andrés Páez
Universidad de los Andes
apaez@uniandes.edu.co

To appear in: Durán, J. M., & Pozzi, G. (Eds.), Philosophy of science for machine learning: Core issues and new perspectives. Synthese Library. Cham: Springer.

ABSTRACT

In a recent paper, Erasmus et al. (2021) defend the idea that the ambiguity of the term “explanation” in explainable AI (XAI) can be solved by adopting any of four different extant accounts of explanation in the philosophy of science: the Deductive Nomological, Inductive Statistical, Causal Mechanical, and New Mechanist models. In this chapter, I show that the authors’ claim that these accounts can be applied to deep neural networks as they would to any natural phenomenon is mistaken. I also provide a more general argument as to why the notion of explainability as it is currently used in the XAI literature bears little resemblance to the traditional concept of scientific explanation. It would be more fruitful to use the label “understandable AI” to avoid the confusion that surrounds the goal and purposes of XAI. In the second half of the chapter, I argue for a pragmatic conception of understanding that is better suited to play the central role attributed to explanation in XAI. Following Kuorikoski & Ylikoski (2015), the conditions of satisfaction for understanding an ML system are fleshed out in terms of an agent’s success in using the system and in drawing correct inferences from it.

e.g., soundness, completeness, compactness and comprehensibility (p. 36). I am highly skeptical that a formal definition of explanation in terms of necessary and sufficient conditions or properties can be found. Instead of trying to define explanation in ML, it would be more fruitful to think of the provision of understanding as the common element that defines, in a sense, what all XAI methods share. This allows a plurality of explanatory methods to flourish as long as they provide understanding to the system’s users. A robust account of understanding in ML should provide an adequate grounding for all such methods. In this final section, I want to further develop the account of understanding in ML that I offered in my 2019 paper by adding the idea that understanding is a success concept in the sense explained below.
id: 8a83983ef4f4060cf76d52c74091a562 - page: 16
4.1 Understanding as a Success Concept

In epistemology, understanding is often distinguished from knowledge.9 There are two main differences between these concepts. First, understanding is seen as a higher epistemic achievement than knowledge (Kvanvig, 2003; Pritchard, 2010). I can come to know that my coffee is cold just by sipping from the cup, while understanding why my coffee is cold involves relating the coffee’s temperature to the laws of thermodynamics. Secondly, the objects of understanding are generally more complex and structured than the objects of knowledge (Zagzebski, 2019). We want to understand the stock market, the

9 Needless to say, this opinion is not unanimous among philosophers. I cannot discuss all the arguments for and against this view in this chapter, but if my claim that there are non-factive paths to understanding in ML is correct, the absence of truth will prevent these cases of understanding from being reduced to some species of knowledge.
id: 59be3d56657174ed5db9df31fbcdf06e - page: 16
Even when we try to understand something simple like my coffee getting cold, the fact must be inserted into a broader,
id: 2092dd92259d255465e4b426137c3661 - page: 17
Some philosophers have argued that understanding is simply knowledge of causes: if I know the cause of p, I understand why p (Lipton, 2004). Similarly, Khalifa (2017) argues that understanding is knowledge of explanations. De Regt offers the following example to show that knowing why p is not equivalent to understanding why p: Merely knowing that global warming is caused by the increase of CO2 in the atmosphere does not yet amount to understanding it. A student may be able to answer the question “Why does global warming happen?” correctly by answering “Because of the increase of CO2 in the atmosphere.” But this does not imply that she understands why global warming occurs; she merely knows what its cause is. The student understands why global warming happens if she not only knows that it is caused by the increase of CO2, but also grasps the causal, explanatory relation between cause and effect. In
id: 53e5fdca0bdcbf4135c360896ccc29bf - page: 17
How to Retrieve?
# Search

curl -X POST "https://search.dria.co/hnsw/search" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"rerank": true, "top_n": 10, "contract_id": "ppzYK1O_r12D7fWKHUKktxZifMHNOR8-NrDz9Net29U", "query": "What is alexanDRIA library?"}'
        
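The same search call can also be issued programmatically. Below is a minimal Python sketch using the requests library; it mirrors the curl example above, while the API key placeholder, the example query string, and the exact shape of the JSON response are assumptions rather than documented behavior.

# Python sketch of the /hnsw/search call shown above.
# Assumptions: requests is installed, <YOUR_API_KEY> is replaced with a valid key,
# and the endpoint returns JSON whose exact schema is not documented here.
import requests

SEARCH_URL = "https://search.dria.co/hnsw/search"
CONTRACT_ID = "ppzYK1O_r12D7fWKHUKktxZifMHNOR8-NrDz9Net29U"

payload = {
    "rerank": True,
    "top_n": 10,
    "contract_id": CONTRACT_ID,
    "query": "Why does Paez prefer 'understandable AI' over 'explainable AI'?",  # example query, not from the source
}

response = requests.post(
    SEARCH_URL,
    headers={"x-api-key": "<YOUR_API_KEY>", "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json())  # inspect the returned chunks and scores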
# Query

curl -X POST "https://search.dria.co/hnsw/query" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"vector": [0.123, 0.5236], "top_n": 10, "contract_id": "ppzYK1O_r12D7fWKHUKktxZifMHNOR8-NrDz9Net29U", "level": 2}'
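The vector in the curl example above is only a two-value placeholder. Since the metadata lists jina_embeddings_v2_base_en as the embedding model, a real query vector should presumably be produced by that same model (768 dimensions). The sketch below assumes the sentence-transformers package and the public jinaai/jina-embeddings-v2-base-en checkpoint; the "level" value is simply copied from the curl example.

# Python sketch of building a query vector and calling /hnsw/query.
# Assumption: the index expects embeddings from jina_embeddings_v2_base_en,
# loaded here via sentence-transformers (the Jina checkpoints require
# trust_remote_code=True).
import requests
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("jinaai/jina-embeddings-v2-base-en", trust_remote_code=True)
vector = model.encode("What is understanding as a success concept?").tolist()  # example query text

response = requests.post(
    "https://search.dria.co/hnsw/query",
    headers={"x-api-key": "<YOUR_API_KEY>", "Content-Type": "application/json"},
    json={
        "vector": vector,  # full 768-dimensional embedding, not the 2-value placeholder
        "top_n": 10,
        "contract_id": "ppzYK1O_r12D7fWKHUKktxZifMHNOR8-NrDz9Net29U",
        "level": 2,  # value copied from the curl example above
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())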