Ms-RAG · Artificial Intelligence
Comprehensible Artificial Intelligence on Knowledge Graphs: A Survey.
Contract ID: nfFjiMlLHlaXR3qTWSCn8wE4KSP7NCWsLkg00hHa-LY
File Type: PDF
Entry Count: 147
Embed. Model: jina_embeddings_v2_base_en
Index Type: hnsw

Simon Schramm (a, b, 1), Christoph Wehner (a, 1), Ute Schmid (a)
a: Cognitive Systems Group, University of Bamberg, An der Weberei 5, 96049 Bamberg, Bavaria, Germany
b: BMW Group, Petuelring 130, 80809 Munich, Bavaria, Germany

Abstract
Artificial Intelligence applications gradually move outside the safe walls of research labs and invade our daily lives. This is also true for Machine Learning methods on Knowledge Graphs, whose application has increased steadily since the beginning of the 21st century. However, in many applications, users require an explanation of the AI's decision, which has led to an increased demand for Comprehensible Artificial Intelligence. Knowledge Graphs epitomize fertile soil for Comprehensible Artificial Intelligence due to their ability to display connected data, i.e. knowledge, in a human- as well as machine-readable way. This survey gives a short history of Comprehensible Artificial Intelligence on Knowledge Graphs. Furthermore, we contribute by arguing that the concept of Explainable Artificial Intelligence is overloaded and overlaps with Interpretable Machine Learning. By introducing the parent concept Comprehensible Artificial Intelligence, we provide a clear-cut distinction between both concepts while accounting for their similarities. Thus, this survey makes a case for Comprehensible Artificial Intelligence on Knowledge Graphs, consisting of Interpretable Machine Learning and Explainable Artificial Intelligence on Knowledge Graphs. This leads to the introduction of a novel taxonomy for Comprehensible Artificial Intelligence on Knowledge Graphs. In addition, a comprehensive overview of research in this field is presented and put into the context of the taxonomy. Finally, research gaps in the field are identified for future research.

Even though the authors use a super-ordinate graph for generating
id: 18b96f19642c5f1c439bb21337e7fa6e - page: 13
GraphSHAP, introduced by Perotti et al., likewise makes use of the Shapley value, but on the GC level. The system decomposes the graph into motifs, whereby motifs are recurring patterns in the KG. For instance, a motif could be a triangle, a four-clique, or a cycle. GraphSHAP then uses Shapley values to calculate the contribution of each motif to the graph classifier's output. Similarly, Schlichtkrull et al. learn a straightforward classifier that predicts whether an edge can be neglected or not, given its contribution to the prediction. For every edge in every layer of a given, trained GNN, the system learns a classifier that predicts if that edge can be dropped. The authors claim to train the classifier in a fully differentiable fashion, employing stochastic gates and encouraging sparsity through the expected L0 norm, which also makes the classifier an interpretable attribution method to analyze G
id: 6f74dbf231d05b99ac7f83dc11f6b365 - page: 13
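
The Shapley value underlying GraphSHAP admits a compact brute-force form. The following is a minimal Python sketch of Shapley-style motif attribution, not the GraphSHAP implementation: predict (the black-box classifier's score for a graph) and mask_graph (rebuilds a graph from a subset of motifs) are hypothetical placeholders, and the exact computation is exponential in the number of motifs, which is why practical methods approximate it.

# Shapley-style motif attribution (brute force; hypothetical helpers)
from itertools import combinations
from math import factorial

def shapley_motif_scores(motifs, predict, mask_graph):
    n = len(motifs)
    scores = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                # classic Shapley weight for a coalition of this size
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                kept = [motifs[j] for j in subset]
                marginal = predict(mask_graph(kept + [motifs[i]])) - predict(mask_graph(kept))
                scores[i] += weight * marginal
    return scores  # contribution of each motif to the classifier output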
3.2.4. Graph generation methods
Some methods focus on training a graph generator instead of optimizing the input KG directly. For GC tasks, Yuan et al. propose XGNN, a method to explain GNN predictions at the model level. The authors train a separate graph generator in an RL setting that predicts a suitable way to add a new edge to the input graph. The generator produces a perturbation of the input graph, whereby changes are made in a way that af-
Figure 9: Generic components of GraphLIME, taken from Huang et al. The system samples neighbors of the node that is to be explained (in red) and vertices (in green) in the proximity of that node (a). From the red feature vectors, the explanatory parts are determined (b) and presented to the user (c).
id: b8aa220a3861eb9c420707eab299c377 - page: 13
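
To make the generator idea concrete, here is a rough Python sketch of a single XGNN-style step under strong simplifications: a random choice stands in for the learned RL policy, and gnn_prob (the black-box GNN's probability for a target class) is a hypothetical placeholder. In XGNN itself, the reward instead updates the policy, e.g. via policy gradient.

# One generator step: propose an edge, score the perturbed graph (sketch)
import random

def generator_step(nodes, edges, gnn_prob, target_class):
    candidates = [(u, v) for u in nodes for v in nodes
                  if u != v and (u, v) not in edges]
    u, v = random.choice(candidates)            # stand-in for the learned policy
    perturbed = edges | {(u, v)}
    reward = gnn_prob(perturbed, target_class)  # target-class probability as reward
    # keep the new edge only if it raises the target-class probability
    if reward > gnn_prob(edges, target_class):
        return perturbed, reward
    return edges, reward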
The RL policy works with manually generated graph rules that eventually explain the contributions of both nodes and edges. XGNN is appropriate for both graphs and KGs. Yet another model-agnostic but symbolic post-hoc explanation method for graph classifiers was proposed by Abrate and Bonchi in 2021. The authors construct a counterfactual graph, which, compared to Shapley values, can provide a deterministic explanation of why a classifier made a prediction. The method proposed by the authors is a heuristic search scheme that can be conducted in two variants: (1) Oblivious: the algorithm searches for the optimal counterfactual graph by querying the black-box graph classifier while abstracting from the original graph. (2) Data-driven: the algorithm gains additional access to a subset, a superset, or the original training dataset of the black-box classifier. Similar to SHAP and LIME, the authors rank statistics about the input data in order to achieve exp
id: bce7afd7a38d7de254e6046e4b71209e - page: 14
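
The oblivious variant can be illustrated with a deliberately naive search: toggle one edge at a time and query the black-box classifier until the prediction flips. The sketch below assumes exactly that, with classify as a hypothetical black-box oracle; the actual method uses a more elaborate heuristic search over candidate edits.

# Naive one-edge-flip counterfactual search (oblivious variant; sketch)
def oblivious_counterfactual(edges, candidate_pairs, classify):
    original = classify(frozenset(edges))
    for pair in candidate_pairs:
        flipped = frozenset(set(edges) ^ {pair})  # add if absent, remove if present
        if classify(flipped) != original:
            return flipped  # counterfactual graph: the prediction changed
    return None  # no single-edge counterfactual found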
How to Retrieve?
# Search

curl -X POST "https://search.dria.co/hnsw/search" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"rerank": true, "top_n": 10, "contract_id": "nfFjiMlLHlaXR3qTWSCn8wE4KSP7NCWsLkg00hHa-LY", "query": "What is alexanDRIA library?"}'
        
# Query

curl -X POST "https://search.dria.co/hnsw/query" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"vector": [0.123, 0.5236], "top_n": 10, "contract_id": "nfFjiMlLHlaXR3qTWSCn8wE4KSP7NCWsLkg00hHa-LY", "level": 2}'