Discover how you can utilize LLMs in recommender systems.
messages.append(prompt)

# send to OpenAI
res = chat(messages)
print(res.content)

Blade Runner 2049
Ex Machina
The Matrix

Let's try another example, this time without specifying a theme but instead feeding in the user's previous actions.

# add the latest AI response to the message history
messages.append(res)

# now create a new user prompt
prompt = HumanMessage(
    content="""
    Last week I watched the following movies:
    "Shawshank Redemption", "The Godfather"
    Can you recommend me another movie for tonight?
    """
)

# add to messages
messages.append(prompt)

# send to chat-gpt
res = chat(messages)
print(res.content)

Based on your previous movie choices, here are three new movie recommendations for you:
1. "Pulp Fiction"
2. "Fight Club"
3. "Goodfellas"

I think both answers are pretty satisfying. Especially considering that we can further improve the results just by fine-tuning our prompts, LLMs themselves might become the most popular recommender systems of the near future. However, this won't be enough for a production-level implementation: since LLMs can only recommend items they saw during their training period, the approach does not work for your own application by default. Once again, retrieval-augmented generation (RAG) enters the scene. With the help of RAG, it is possible to guide LLMs to recommend movies from your own dataset. Now, let's try it together. Here I am going to use LangChain, Pinecone and OpenAI to build a specialized chat agent that serves movie recommendations from a dataset of 1,000 movies.
First, we need to embed the descriptions of the 1,000 movies. Here you can find the dataset I used.

# Using the text-embedding-ada-002 model from OpenAI to embed movie descriptions
from openai import OpenAI

client = OpenAI()

def get_embedding(text, model="text-embedding-ada-002"):
    text = text.replace("\n", " ")
    return client.embeddings.create(input=[text], model=model).data[0].embedding

from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
doc_result = embeddings.embed_documents(movie_data_demo.Description.tolist())
print(len(doc_result))
movie_data_demo["Embeddings"] = doc_result

The second step is to store these vectors in a vector database.

import numpy as np
import pinecone

shape = np.array(doc_result).shape  # (number of movies, embedding dimension)

pinecone.init(api_key=PINECONE_KEY, environment='us-east-1-aws')

index_name = 'movie-demo'

# if the index does not exist, we create it
if index_name not in pinecone.list_indexes():
    pinecone.create_index(
        index_name,
        dimension=shape[1],
        metric='cosine'
    )

# connect to the index
index = pinecone.Index(index_name)
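As a quick sanity check (my addition, not part of the original walkthrough), we can confirm the index is live and reports the expected embedding dimension before loading any data:

# Optional: verify the index configuration (1536 dimensions for
# text-embedding-ada-002) and that it is currently empty.
print(index.describe_index_stats())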
Next, we prepare our vectors and metadata to upsert into our index.

# Create an empty list for descriptions
desc_list = []

# Iterate over each row
for desc_index, rows in movie_data_demo.iterrows():
    # Wrap the current row's description and stringify it
    descriptions = str([rows.Description])
    # Append it to the final list
    desc_list.append(descriptions)

# Create an empty list for names
name_list = []

# Iterate over each row
for name_index, rows in movie_data_demo.iterrows():
    # Wrap the current row's name and stringify it
    names = str([rows.Name])
    # Append it to the final list
    name_list.append(names)

Now we are ready to upsert what we have into our index on Pinecone.

batch_size = 128
ids = [str(i) for i in range(shape[0])]  # one id per movie
# create list of metadata dictionaries
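The excerpt cuts off at this comment. As a minimal sketch (my assumption, not the original post's code), the metadata dictionaries can be built from the name and description lists above, and the vectors upserted in batches:

# Sketch of the truncated step: one metadata dict per movie,
# then batched upserts of (id, vector, metadata) tuples.
meta_list = [
    {"name": name, "description": desc}
    for name, desc in zip(name_list, desc_list)
]

to_upsert = list(zip(ids, doc_result, meta_list))
for i in range(0, len(to_upsert), batch_size):
    # upsert one batch of at most `batch_size` vectors
    index.upsert(vectors=to_upsert[i : i + batch_size])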
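The section stops before the retrieval step, but the chat agent promised above would query this index at recommendation time. A minimal sketch, assuming the embeddings, index, and chat objects defined earlier (the sample request is illustrative):

from langchain.schema import HumanMessage

user_request = "I loved 'Inception' and 'Interstellar'. What should I watch tonight?"

# 1. Embed the request with the same model used for the movie descriptions
query_vector = embeddings.embed_query(user_request)

# 2. Retrieve the most similar movies from our own catalogue
result = index.query(vector=query_vector, top_k=5, include_metadata=True)
candidates = "\n".join(
    f"- {m['metadata']['name']}: {m['metadata']['description']}"
    for m in result["matches"]
)

# 3. Ask the LLM to recommend only from the retrieved candidates
prompt = HumanMessage(
    content=f"""Recommend one movie for tonight, choosing ONLY from this list:
{candidates}

User request: {user_request}
"""
)
res = chat([prompt])
print(res.content)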