Created at 4am, Jan 7
cyranodb · Artificial Intelligence
Artificial Intelligence A Modern Approach by Stuart J. Russell and Peter Norvig
Contract ID
jslhZnhiYGrDBswfpjtt9YUdm1cA74VasUUKIpdKzcA
File Type
PDF
Entry Count
4667
Embed. Model
jina_embeddings_v2_base_en
Index Type
hnsw

Artificial Intelligence (AI) is a big field, and this is a big book. We have tried to explore the full breadth of the field, which encompasses logic, probability, and continuous mathematics; perception, reasoning, learning, and action; and everything from microelectronic devices to robotic planetary explorers. The book is also big because we go into some depth. The subtitle of this book is “A Modern Approach.” The intended meaning of this rather empty phrase is that we have tried to synthesize what is now known into a common framework, rather than trying to explain each subfield of AI in its own historical context. We apologize to those whose subfields are, as a result, less recognizable.

(See Exercise 15.12.)
id: 86b7f13ab5f66ead76dfa9a06c636b28 - page: 606
15.4.3 The general case

The preceding derivation illustrates the key property of Gaussian distributions that allows Kalman filtering to work: the fact that the exponent is a quadratic form. This is true not just for the univariate case; the full multivariate Gaussian distribution has the form

N(μ, Σ)(x) = α e^(−½ (x − μ)ᵀ Σ⁻¹ (x − μ)) .   (15.20)
id: 926bcdbb7e672f6fc35deafcf169c74d - page: 606
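Equation (15.20) above writes the multivariate Gaussian only up to a normalizing constant α. As a minimal NumPy sketch (the function and variable names here are ours, not the book's), the full density with the normalization spelled out makes the quadratic form in the exponent explicit:

```python
import numpy as np

def multivariate_gaussian(x, mu, Sigma):
    """Density of N(mu, Sigma) at x.

    The exponent -0.5 * (x - mu)^T Sigma^{-1} (x - mu) is a quadratic
    form in x, which is the property that Kalman filtering relies on.
    """
    d = len(mu)
    diff = x - mu
    # alpha in Eq. (15.20), written out: 1 / sqrt((2*pi)^d * |Sigma|)
    alpha = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(Sigma))
    return alpha * np.exp(-0.5 * diff @ np.linalg.inv(Sigma) @ diff)
```

For example, at the mean of a standard 2-D Gaussian (μ = 0, Σ = I) the exponent vanishes and the density reduces to 1/(2π).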
Multiplying out the terms in the exponent makes it clear that the exponent is also a quadratic function of the values xᵢ in x. As in the univariate case, the filtering update preserves the Gaussian nature of the state distribution.

Let us first define the general temporal model used with Kalman filtering. Both the transition model and the sensor model allow for a linear transformation with additive Gaussian noise. Thus, we have

P(x_{t+1} | x_t) = N(F x_t, Σ_x)(x_{t+1})
P(z_t | x_t) = N(H x_t, Σ_z)(z_t) ,   (15.21)

where F and Σ_x are matrices describing the linear transition model and transition noise covariance, and H and Σ_z are the corresponding matrices for the sensor model. Now the update equations for the mean and covariance, in their full, hairy horribleness, are

μ_{t+1} = F μ_t + K_{t+1} (z_{t+1} − H F μ_t)
Σ_{t+1} = (I − K_{t+1} H)(F Σ_t Fᵀ + Σ_x) ,   (15.22)
id: b7d2caef403bee6b3ffd5aab0e0ed022 - page: 607
where K_{t+1} = (F Σ_t Fᵀ + Σ_x) Hᵀ (H (F Σ_t Fᵀ + Σ_x) Hᵀ + Σ_z)⁻¹ is called the Kalman gain matrix. Believe it or not, these equations make some intuitive sense. For example, consider the update for the mean state estimate μ. The term F μ_t is the predicted state at t + 1, so H F μ_t is the predicted observation. Therefore, the term z_{t+1} − H F μ_t represents the error in the predicted observation. This is multiplied by K_{t+1} to correct the predicted state; hence, K_{t+1} is a measure of how seriously to take the new observation relative to the prediction. As in Equation (15.20), we also have the property that the variance update is independent of the observations. The sequence of values for Σ_t and K_t can therefore be computed offline, and the actual calculations required during online tracking are quite modest.
id: 00dc27061b590c3b0c8d60bde5dde1a4 - page: 607
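The update equations in the excerpt above translate almost line for line into code. Below is a minimal NumPy sketch of one filtering step (predict, compute the Kalman gain, then correct); the function name `kalman_update` and the argument names are our labels for the book's F, H, Σ_x, Σ_z, not an API from the text:

```python
import numpy as np

def kalman_update(mu, Sigma, z, F, Sigma_x, H, Sigma_z):
    """One Kalman filtering step for the linear-Gaussian model.

    mu, Sigma  -- current state estimate N(mu, Sigma)
    z          -- new observation z_{t+1}
    F, Sigma_x -- transition matrix and transition noise covariance
    H, Sigma_z -- sensor matrix and sensor noise covariance
    """
    mu_pred = F @ mu                        # predicted state F mu_t
    P = F @ Sigma @ F.T + Sigma_x           # predicted covariance
    # Kalman gain: how seriously to take the observation vs. the prediction
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Sigma_z)
    mu_new = mu_pred + K @ (z - H @ mu_pred)   # correct by observation error
    Sigma_new = (np.eye(len(mu)) - K @ H) @ P  # variance update: no z needed
    return mu_new, Sigma_new
```

Note that `Sigma_new` never touches `z`, which is exactly the observation-independence property the excerpt mentions: the sequence of covariances and gains can be precomputed offline.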
How to Retrieve?
# Search

curl -X POST "https://search.dria.co/hnsw/search" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"rerank": true, "top_n": 10, "contract_id": "jslhZnhiYGrDBswfpjtt9YUdm1cA74VasUUKIpdKzcA", "query": "What is alexanDRIA library?"}'
        
# Query

curl -X POST "https://search.dria.co/hnsw/query" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"vector": [0.123, 0.5236], "top_n": 10, "contract_id": "jslhZnhiYGrDBswfpjtt9YUdm1cA74VasUUKIpdKzcA", "level": 2}'