Created at 1pm, Dec 29
firstbatch · Artificial Intelligence
Upgrading Recommender Systems with Large Language Models
jbKsQ_J0bgXmvS7rlt4YiIa5h0B0qfpf0-xLvY35evc
File Type: PDF
Entry Count: 29
Embedding Model: jina_embeddings_v2_base_en
Index Type: hnsw

Explore how Large Language Models (LLMs) are transforming recommender systems, enhancing user experience, and streamlining personalization. This piece delves into the integration of advanced AI, including ChatGPT and other LLMs, into recommendation engines, and shows how industry giants like Microsoft and Google are leveraging these technologies to redefine user engagement and business strategies. Join us on a journey through this technological evolution, where simplicity meets efficiency in building customized, user-centric recommendation models.

Based on your previous movie choices, here are three new movie
id: 6aa9c3585c501f71067c88eb51e4f985 - page: 10
1. "Pulp Fiction" 2. "Fight Club" 3. "Goodfellas" Both of these responses are quite impressive. It's particularly fascinating to consider how simply fine-tuning our prompts can significantly enhance the effectiveness of LLMs. This potential positions them as likely frontrunners in the realm of recommender systems in the near future. However, when it comes to deploying these models in a production environment, there are additional considerations to address. One key limitation is that LLMs typically base their recommendations on the data they were trained on, which may not directly align with the specific needs of your application. This is where the concept of Retrieval Augmented Generation (RAG) becomes crucial. RAG allows us to direct LLMs to make recommendations from a tailored dataset, effectively customizing their output to suit specific applications. To further demonstrate the practical application of these concepts, let's delve into
id: 59569a9f9a934600d2d6a0d0e01bffd8 - page: 11
For this, I again employed tools such as Langchain, Pinecone, and OpenAI to develop a unique chat agent. This agent is specifically designed to suggest movie recommendations from a handpicked selection of 1,000 titles. Here's how it operates: when you request movie recommendations, the agent first employs Retrieval Augmented Generation (RAG) to interpret your prompt. This interpretation is key to identifying the most relevant movie candidates from our dataset, which we previously uploaded to Pinecone as vectorized data. The agent then presents these selected candidates to GPT, incorporating them as contextual input. This step is crucial because it allows the GPT model to generate personalized movie suggestions based on the context derived from your initial prompt. The result is a finely tuned recommendation that reflects your specific preferences and interests.
id: 718022933b1fe23239c2c2c0ee95e979 - page: 11
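The retrieval step the agent performs can be sketched without any external services. This is a minimal, self-contained illustration, not the article's actual Pinecone-backed pipeline: the titles, the 3-dimensional embeddings, and the query vector are all made-up stand-ins, just to show how candidates are ranked by similarity before being handed to the LLM.

```python
from math import sqrt

# Toy in-memory "vector store": (title, embedding) pairs. In the article the
# 1,000 titles live in Pinecone as vectorized data; these tiny vectors are
# hypothetical stand-ins for illustration only.
MOVIES = [
    ("The Godfather", [0.9, 0.1, 0.0]),
    ("Goodfellas",    [0.8, 0.2, 0.1]),
    ("Toy Story",     [0.1, 0.9, 0.3]),
    ("Airplane!",     [0.0, 0.8, 0.5]),
]

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vector, top_n=2):
    """RAG retrieval step: rank stored titles by similarity to the query."""
    ranked = sorted(MOVIES, key=lambda m: cosine(m[1], query_vector), reverse=True)
    return [title for title, _ in ranked[:top_n]]

# A query embedding leaning toward crime dramas (hypothetical values).
candidates = retrieve([0.85, 0.15, 0.05])
print(candidates)  # the two crime-drama titles rank first
```

The retrieved candidates are what the agent then folds into the prompt as context for GPT.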
Let's see the results!

# Instructing the LLM
messages = [
    SystemMessage(content="You are a movie recommender.")
]

# Requesting recommendations
query = "Can you recommend me some drama movies?"
augmented_prompt = f"""Answer the query only by referencing t
Contexts: {source_knowledge}
id: e85fb96f41238457c89b57221f86bce4 - page: 11
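The snippet above is truncated mid-string in this preview. A library-free sketch of how such an augmented prompt is plausibly assembled follows; the candidate titles, the exact instruction wording, and the OpenAI-style message dicts are assumptions, not the article's verbatim code.

```python
# Candidates returned by the RAG retrieval step (made-up examples).
source_knowledge = "\n".join([
    "The Shawshank Redemption (1994) - drama",
    "Forrest Gump (1994) - drama",
])

query = "Can you recommend me some drama movies?"

# Fold the retrieved context into the prompt, instructing the model to
# answer only from the supplied contexts rather than its training data.
augmented_prompt = f"""Answer the query only by referencing the contexts below.

Contexts:
{source_knowledge}

Query: {query}"""

# OpenAI-style chat payload: the system role sets the recommender persona,
# the user message carries the context-augmented query.
messages = [
    {"role": "system", "content": "You are a movie recommender."},
    {"role": "user", "content": augmented_prompt},
]
```

Sending `messages` to a chat model then yields recommendations drawn from the retrieved candidates rather than the model's training data.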
How to Retrieve?
# Search

curl -X POST "https://search.dria.co/hnsw/search" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"rerank": true, "top_n": 10, "contract_id": "jbKsQ_J0bgXmvS7rlt4YiIa5h0B0qfpf0-xLvY35evc", "query": "What is alexanDRIA library?"}'
        
# Query

curl -X POST "https://search.dria.co/hnsw/query" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"vector": [0.123, 0.5236], "top_n": 10, "contract_id": "jbKsQ_J0bgXmvS7rlt4YiIa5h0B0qfpf0-xLvY35evc", "level": 2}'
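The search call above can also be issued from Python. This is a standard-library sketch that assembles the same request as the curl example; the endpoint, header names, and payload fields are taken from that example, and the API key remains a placeholder. The final `urlopen` call is left commented out because it requires a valid key.

```python
import json
import urllib.request

def build_search_request(api_key, contract_id, query, top_n=10, rerank=True):
    """Assemble the POST request for the /hnsw/search endpoint shown above."""
    payload = {
        "rerank": rerank,
        "top_n": top_n,
        "contract_id": contract_id,
        "query": query,
    }
    return urllib.request.Request(
        "https://search.dria.co/hnsw/search",
        data=json.dumps(payload).encode("utf-8"),
        headers={"x-api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

req = build_search_request(
    "<YOUR_API_KEY>",
    "jbKsQ_J0bgXmvS7rlt4YiIa5h0B0qfpf0-xLvY35evc",
    "What is alexanDRIA library?",
)
# With a real key: response = urllib.request.urlopen(req)
```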