Artificial Intelligence
Contract ID: fs8QJMDor1cWFxDDi_KuS7dnrhuw4AfPEpqSyp0vX5Q
File Type: PDF
Entry Count: 470
Embedding Model: jina_embeddings_v2_base_en
Index Type: hnsw


this new wrinkle, but to their surprise, their enhanced model doesn't work. No matter how they try, you can still prove them wrong by picking the other color. So how did you show them up? By expanding the box, the boundary between the inside and outside of your thoughts, in this case to include them. In short, if the box is big enough, what's inside it cannot in all circumstances predict what it will do, even though something completely outside the box can (in principle, as far as we know). As long as you can enlarge the box to include the prediction, no such prediction can always be correct.
id: 2b658aa4fbfc537caebaed5df6410929 - page: 94
Now, there's nothing in this argument that can't apply as well to a machine as to you. We can build a robot that does exactly what you did. No matter how we program that robot to make decisions, no matter how predictable that robot is, as long as it has access to an outside forecast of its own actions, that forecast can't always be correct. The robot can simply wait for that forecast, then do the opposite. So a sufficiently capable robot can't always be predicted, where sufficiently capable means it has access to the attempt to predict what it will do.
id: 609e1d94ce5dcc977b1d29ae5b53292f - page: 94
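This "wait, then do the opposite" move is mechanical enough to write down directly. Below is a minimal sketch in Python; the predictor function is a hypothetical stand-in for any forecasting model, not anything described in the book:

# The robot observes the forecast of its own choice and contradicts it.
def contrarian_robot(forecast):
    return "blue" if forecast == "red" else "red"

# Any predictor, however sophisticated, must commit to some answer
# that the robot can then observe. (Stand-in implementation.)
def predictor(robot):
    return "red"

forecast = predictor(contrarian_robot)
assert contrarian_robot(forecast) != forecast  # wrong by construction

Whatever value predictor returns, the assertion holds, which is the point: no forecast the robot can read before acting can always be correct.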
This is an example of what computer scientists call an undecidable problem: there is no effective algorithm that can solve the problem completely (meaning that it gives a correct answer in all cases). Note that this is an entirely different concept from the more widely known and similarly named uncertainty principle in physics, which states that your knowledge of both the position and momentum of a particle is limited in precision and inversely related. Undecidable problems really do exist. Probably the most famous one was formulated by none other than Alan Turing, and it is called the halting problem. The halting problem is easy to state: can you write a program A that will examine any other program B along with its input and tell you whether or not B will eventually stop running? In other words, can A tell if B will ever finish and produce an answer? Turing showed that no such program A can exist, using an argument similar to the one above.[11]
id: ee30124b4909941e847f094fb643667e - page: 94
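Turing's argument can be sketched in a few lines of code. Assume, for contradiction, that program A exists; the names halts and troublemaker below are hypothetical, chosen only for illustration:

# Assume program A exists: a function that always answers correctly.
# (Stub only; Turing's argument shows it cannot actually be written.)
def halts(program, program_input):
    ...

# Build a program that does the opposite of whatever A predicts about it.
def troublemaker(program):
    if halts(program, program):  # ask A: does `program` halt on itself?
        while True:              # if A says yes, loop forever
            pass
    return "halted"              # if A says no, halt immediately

# Now consider troublemaker(troublemaker). If halts(troublemaker,
# troublemaker) returns True, troublemaker loops forever, so A was wrong.
# If it returns False, troublemaker halts, so A was wrong again. Either
# way A fails on at least one input, so no always-correct A can exist.

This is the same diagonal move as the red/blue game: the troublemaker program is given access to the prediction about itself and simply contradicts it.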
So in practice, what actually happens? The program doesn't make a mistake, that is, give you a wrong answer. Instead, it simply never stops running. In the case of our future scientists, no matter how clever their prediction process, in some cases it will simply never reach a conclusion as to whether you are going to pick red or blue. This doesn't mean you don't get to pick your answer, just that they can't always tell in advance what you are going to pick. The scientists might cry foul, noting that they are never wrong, which is true. But you counter that never being wrong is not the same thing as being able to reliably predict your behavior.
id: 569df3ad5fedd8edd14d368209a8a5c8 - page: 95
How to Retrieve?
# Search

curl -X POST "https://search.dria.co/hnsw/search" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"rerank": true, "top_n": 10, "contract_id": "fs8QJMDor1cWFxDDi_KuS7dnrhuw4AfPEpqSyp0vX5Q", "query": "What is alexanDRIA library?"}'
        
# Query

curl -X POST "https://search.dria.co/hnsw/query" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"vector": [0.123, 0.5236], "top_n": 10, "contract_id": "fs8QJMDor1cWFxDDi_KuS7dnrhuw4AfPEpqSyp0vX5Q", "level": 2}'