Knowledge Interface Between Humans and AI
Contract ID: lGOpFEkMzs3Z5hNCTO73y4rEdv1DlM9xrPE3wlcAkV0
File Type: PDF
Entry Count: 10
Embedding Model: jina_embeddings_v2_base_en
Index Type: hnsw

Throughout history, people have constructed knowledge collaboratively as a public good, and the internet has made it accessible to everyone. Thanks to the internet, we are approaching an era where knowledge doubles every 12 hours. For comparison, in 1900 human knowledge was doubling roughly every century; by the end of 1945, the pace had accelerated to every 25 years. Wikipedia stands as a prime example of this collaborative development of knowledge.

Now, in the era of AI, we are witnessing a paradigm shift. Developers' adoption of AI models grows daily (e.g., 250,000+ models on Hugging Face), fueling the demand for ever-evolving knowledge to be integrated with LLMs. Despite the exponential growth of AI, there remains a significant gap in accessible, collaborative platforms for knowledge sharing and retrieval. Training new LLMs or fine-tuning them with fresh knowledge is not enough to keep them current with expanding human knowledge.

The landscape of information has continually evolved. In the digital age, the internet revolutionized the dissemination of knowledge, transforming paper documents into web pages. Google emerged as a dominant force, indexing websites and curating content, establishing a well-understood cycle of digital creation and discovery. Now we stand on the brink of another transformative era. LLMs are redefining how information is created, accessed, and shared. As this shift unfolds, traditional websites are transitioning to Public RAG models, where AI interfaces directly with information.

Today, contributing global knowledge to LLMs is a fragmented and challenging process for most internet users, leaving it accessible mainly to AI developers rather than the billions of people using the internet.

3. Unsafe
a. Siloed and centralized AI knowledge could pave the way for harmful entities to craft manipulative one-to-one persuasion strategies, making accountability difficult to ascertain.
b. We are moving towards a world where various agents continuously influence human judgment, making it easier to manipulate our decisions on critical issues such as elections, wars, and existential challenges.
c. In the post-LLM era, AI-generated data is fed back into the internet, contaminating the very source we use to train large language models. We need a collaborative knowledge platform to maintain a clean data pool.
id: 7d33281fcd88fb4d420636187b95c3f5 - page: 3
Dria

Dria serves as a collective memory for AGI, where information is stored permanently in a format understandable to both humans and AI, ensuring universal accessibility. Dria is a collective Knowledge Hub. The Knowledge Hub consists of small-to-medium-sized public vector databases called knowledge; a knowledge can be created from a PDF file, a podcast, or a CSV file. The vector databases are multi-region and serverless. Dria is fully decentralized, and every index is available as a smart contract, allowing permissionless access to knowledge without needing Dria's services.

Dria provides:
- An API for knowledge retrieval, with both natural-language search and vector query endpoints.
- A docker image for running local APIs without permission.

Knowledge uploaded to Dria is public and permanent.
id: 67e2a37a83f5936f8f03572b6f369128 - page: 3
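As a concrete illustration, the metadata header of this page describes one such knowledge. Below is a minimal sketch of how a client might hold that metadata together; the dictionary keys are chosen for illustration, and only contract_id is an actual parameter of the retrieval API shown under "How to Retrieve?".

knowledge = {
    "contract_id": "lGOpFEkMzs3Z5hNCTO73y4rEdv1DlM9xrPE3wlcAkV0",  # smart-contract index of this knowledge
    "file_type": "PDF",                                            # source the knowledge was created from
    "entry_count": 10,                                             # number of indexed entries
    "embedding_model": "jina_embeddings_v2_base_en",               # model used to embed the entries
    "index_type": "hnsw",                                          # vector index algorithm
}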
Paving the Way for a Human-AGI Knowledge Interface: Core Solutions

1. Cost of Knowledge at Its Lowest: Dria modernizes AI interfacing by indexing and delivering the world's knowledge via LLMs. Dria's Public RAG Models democratize knowledge access with cost-effective, shared RAG models. Today, Dria handles Wikipedia's entire 23 GB database and its 56 billion annual requests at just $258.391, roughly $4.6 per billion requests, a scale unattainable by other vector databases. Dria operates as a Decentralized Knowledge Hub serving multiple regions, offering natural-language access and API integration. Dria supports multiple advanced indexing algorithms and embedding models, offering the flexibility to seamlessly switch between algorithms or embedding models over the same data while keeping retrieval quality consistently state-of-the-art.

2. Contributing Is Easy and Incentivized for Everyone: Dria's "zero technical mumbo jumbo" approach allows everyone to contribute knowledge to LLMs:
id: 89032adae4b822a9d10a06ab271663b3 - page: 4
Dria's Drag & Drop Public RAG Model effortlessly transforms knowledge into a retrievable format through an intuitive drag-and-drop upload. As a permissionless and decentralized protocol, Dria creates an environment where knowledge uploaders can earn rewards for the value of their verifiable work:
- Users worldwide can contribute valuable knowledge, with permissionless access to the shared RAG knowledge for LLMs, applications, and open-source developers.
- If other participants query that knowledge and it produces valuable insights in AI applications, the contributors earn rewards for their verifiable contributions.
- Contributors can then use these rewards to upload or query more knowledge in the collective memory.

3. Safe, Trust-Minimized, and Open Collaboration: Anyone can run RAG models locally through smart contracts, enabling permissionless access to world knowledge.
id: 48efbb2e7f5438c5331bfa086b6edf01 - page: 4
How to Retrieve?
# Search

curl -X POST "https://search.dria.co/hnsw/search" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"rerank": true, "top_n": 10, "contract_id": "lGOpFEkMzs3Z5hNCTO73y4rEdv1DlM9xrPE3wlcAkV0", "query": "What is alexanDRIA library?"}'
        
# Query

curl -X POST "https://search.dria.co/hnsw/query" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"vector": [0.123, 0.5236], "top_n": 10, "contract_id": "lGOpFEkMzs3Z5hNCTO73y4rEdv1DlM9xrPE3wlcAkV0", "level": 2}'