Created at 10pm, Apr 16
buazizi · Artificial Intelligence
Causability and explainability of artificial intelligence in medicine
Contract ID: ubVtoz3Pa4TKGzc3lvIjNdW31DIHPgDU4PC-QsWs6uU
File Type: PDF
Entry Count: 94
Embed. Model: jina_embeddings_v2_base_en
Index Type: hnsw

Artificial intelligence (AI) is perhaps the oldest field of computer science and very broad, dealing with all aspects of mimicking cognitive functions for real-world problem solving and building systems that learn and think like people. Therefore, it is often called machine intelligence (Poole, Mackworth, & Goebel, 1998) to contrast it to human intelligence (Russell & Norvig, 2010). The field revolved around the intersection of cognitive science and computer science (Tenenbaum, Kemp, Griffiths, & Goodman, 2011). AI now raises enormous interest due to the practical successes in machine learning (ML). In AI there was always a strong linkage to explainability; an early example is the Advice Taker proposed by McCarthy in 1958 as a "program with common sense" (McCarthy, 1960). It was probably the first proposal of common-sense reasoning abilities as the key to AI. Recent research emphasizes more and more that AI systems should be able to build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems (Lake, Ullman, Tenenbaum, & Gershman, 2017).

One example is the generative adversarial network (GAN) introduced by Goodfellow et al. (2014). It consists of two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than from G. The training procedure for G is to maximize the probability of D making an error, which works like a minimax (minimizing a possible loss for a worst-case maximum loss) two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere; in the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation.
id: dd292fad55bb66e6b4e6dd831176ab4a - page: 7
To learn the generator's distribution p_g over data x, a prior must be defined on the input noise variables p_z(z), and then a mapping to the data space as G(z; θ_g), where G is a differentiable function represented by a multilayer perceptron with parameters θ_g. The second multilayer perceptron D(x; θ_d) outputs a single scalar. D(x) represents the probability that x came from the data rather than p_g. D can be trained to maximize the probability of assigning the correct label to both training examples and samples from G. Simultaneously, G can be trained to minimize log(1 − D(G(z))); in other words, D and G play the following two-player minimax game with value function V(G, D):

min_G max_D V(D, G) = E_{x∼p_data(x)}[log D(x)] + E_{z∼p_z(z)}[log(1 − D(G(z)))]
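As a quick numerical check of this objective (an illustrative sketch, not from the paper — the 1-D toy setup, sample size, and identity generator are all assumptions): when p_g matches p_data, the unique solution has D(x) = 1/2 everywhere, so each expectation contributes log(1/2) and V = −log 4 ≈ −1.386.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D setup: "real" samples from p_data and noise samples from p_z.
x_data = rng.normal(0.0, 1.0, size=10_000)   # x ~ p_data
z = rng.normal(0.0, 1.0, size=10_000)        # z ~ p_z
g = lambda z: z                              # a perfect generator: p_g = p_data

def value(D, x, gz):
    """Monte-Carlo estimate of V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]."""
    return np.mean(np.log(D(x))) + np.mean(np.log(1.0 - D(gz)))

# At the unique solution D(x) = 1/2 everywhere, so V = 2*log(1/2) = -log 4.
D_opt = lambda x: np.full_like(x, 0.5)
print(value(D_opt, x_data, g(z)))  # → -1.3862943611198906 (= -log 4)
```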
id: 3dbe2068d07c27c0160f7774da965934 - page: 7
The optimization problem is redefined as:

max_{z∈Z} log p(ω_c | g(z)) − λ‖z‖²,

where the first term is a composition of the newly introduced decoder and the original classifier, and where the second term is an ℓ2-norm regularizer in the code space. Once a solution z* to the optimization problem is found, the prototype for c is obtained by decoding the solution, that is, x* = g(z*). The ℓ2-norm regularizer in the input space can be understood in the context of image data as favoring gray-looking images. The effect of the ℓ2-norm regularizer in the code space can instead be understood as encouraging codes that have high probability. High-probability codes do not necessarily map to high-density regions of the input space; for more details refer to the excellent tutorial given by Montavon et al. (2017).
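A minimal sketch of this code-space optimization, with hypothetical stand-ins for the decoder g and the classifier's log-probability (the linear map W, the target pattern t, and λ = 0.1 are invented for illustration, not taken from the paper):

```python
import numpy as np

# Hypothetical stand-ins: a linear "decoder" g and a surrogate classifier
# whose log-probability for class c peaks when g(z) matches a target t.
W = np.array([[1.0, 0.0], [0.0, 2.0]])
g = lambda z: W @ z                       # decoder: code space -> input space
t = np.array([2.0, 2.0])                  # input pattern favoured by class c
log_p = lambda x: -np.sum((x - t) ** 2)   # surrogate for log p(omega_c | x)

lam = 0.1                                 # weight of the l2 code regularizer

def objective(z):
    # max_z  log p(omega_c | g(z)) - lambda * ||z||^2
    return log_p(g(z)) - lam * np.dot(z, z)

# Plain gradient ascent on the code z (central-difference gradient for brevity).
z = np.zeros(2)
for _ in range(500):
    grad = np.array([(objective(z + e) - objective(z - e)) / 2e-5
                     for e in 1e-5 * np.eye(2)])
    z += 0.05 * grad

x_star = g(z)   # decoded prototype for class c
```

For this quadratic toy objective the optimum has the closed form z* = (WᵀW + λI)⁻¹Wᵀt, which the ascent loop converges to; in practice g and the classifier are neural networks and the gradient comes from backpropagation.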
id: d5a9df408c0c6b0b9c1444665d47b247 - page: 7
3.2 | Example use-case histopathology

This section demonstrates the complexity of the explanations of a human pathologist, for the following diagnosis: "Steatohepatitis with mild portal and incomplete-septal fibrosis and mild centrilobular fibrosis of chicken-wire type. The morphological picture corresponds to alcoholic steatohepatitis. History of alcohol abuse?"

FIGURE 3 (Holzinger et al., p. 8 of 13): Features in a histology slide annotated by a human expert pathologist — bile ducts, portal vein, portal track, artery, erythrocyte, Kupffer cell, hepatocyte nucleus, sinusoid endothelial cell nucleus, hepatocyte large lipid droplet.
id: cfd832567a27543aa6fbf5b270b2c35a - page: 7
How to Retrieve?
# Search

curl -X POST "https://search.dria.co/hnsw/search" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"rerank": true, "top_n": 10, "contract_id": "ubVtoz3Pa4TKGzc3lvIjNdW31DIHPgDU4PC-QsWs6uU", "query": "What is alexanDRIA library?"}'
        
# Query

curl -X POST "https://search.dria.co/hnsw/query" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"vector": [0.123, 0.5236], "top_n": 10, "contract_id": "ubVtoz3Pa4TKGzc3lvIjNdW31DIHPgDU4PC-QsWs6uU", "level": 2}'