Explore how Large Language Models (LLMs) are transforming recommender systems, enhancing user experience, and streamlining personalization. This article delves into the integration of advanced AI, including ChatGPT and other LLMs, into recommendation engines. Discover how industry giants like Microsoft and Google are leveraging these technologies to redefine user engagement and business strategies. Join us on a journey through this technological evolution, where simplicity meets efficiency in building customized, user-centric recommendation models.
```python
    Output the titles only. Do not include any other text.
    """
)

# add to messages
messages.append(prompt)

# send to OpenAI
res = chat(messages)
print(res.content)
```

```
Blade Runner 2049
Ex Machina
The Matrix
```

In our next example, we'll take a different approach: instead of defining a specific theme, we'll integrate past user actions into the prompt. This method demonstrates how dynamically adapting recommendations based on user behavior can yield diverse and relevant suggestions.

```python
# add the latest AI response to messages
messages.append(res)

# now create a new user prompt
prompt = HumanMessage(
    content="""
    Last week I watched the following movies:
    "Shawshank Redemption", "The Godfather".
    Can you recommend me another movie for tonight?
    """
)

# add to messages
messages.append(prompt)

# send to ChatGPT
res = chat(messages)
```
```python
print(res.content)
```

```
Based on your previous movie choices, here are three new movie recommendations for you:
1. "Pulp Fiction"
2. "Fight Club"
3. "Goodfellas"
```
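The conversation-history pattern used above can be sketched framework-agnostically: each turn is appended to a running message list, so the model sees all prior context on every call. The `build_history`/`add_turn` helpers and the dict-based message shape below are illustrative assumptions, not part of any specific library.

```python
# Minimal sketch of chat-history accumulation (no external libraries):
# every request sends the full list, so earlier recommendations and
# user feedback stay visible to the model.

def build_history():
    # a system message pins the assistant's role (hypothetical wording)
    return [{"role": "system",
             "content": "You are a helpful movie recommender."}]

def add_turn(history, role, content):
    """Append one message and return the updated history."""
    history.append({"role": role, "content": content})
    return history

history = build_history()
add_turn(history, "user",
         "Recommend three sci-fi movies. Output the titles only.")
add_turn(history, "assistant",
         "Blade Runner 2049\nEx Machina\nThe Matrix")
add_turn(history, "user",
         "Last week I watched Shawshank Redemption. Another pick for tonight?")

# the whole list (system prompt + all turns) is what a real chat call
# would receive as input
print(len(history))  # → 4
```

In a real integration, the dict messages would be replaced by the message objects of whatever client library you use; the append-then-send loop stays the same.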
I find both of these responses quite impressive. It's particularly fascinating how simply fine-tuning our prompts can significantly enhance the effectiveness of LLMs. This potential positions them as likely frontrunners in the realm of recommender systems in the near future.

However, when it comes to deploying these models in a production environment, there are additional considerations to address. One key limitation is that LLMs typically base their recommendations on the data they were trained on, which may not align with the specific needs of your application. This is where Retrieval-Augmented Generation (RAG) becomes crucial. RAG allows us to direct LLMs to make recommendations from a tailored dataset, effectively customizing their output to suit specific applications. To illustrate this, let's embark on a practical journey together. I will again use the tools introduced earlier to build a recommendation agent.
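The core RAG idea can be shown in a few lines: score a catalog against a query vector, then splice only the best matches into the prompt, so the model recommends from our data rather than from its training set. The toy 3-d vectors and movie titles below are stand-ins for real embeddings, purely for illustration.

```python
from math import sqrt

# Toy catalog: title -> pretend embedding (real ones have ~1,500 dims)
catalog = {
    "Blade Runner 2049": [0.9, 0.1, 0.0],
    "The Godfather":     [0.1, 0.9, 0.2],
    "Ex Machina":        [0.8, 0.2, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, k=2):
    """Return the k catalog titles most similar to the query vector."""
    ranked = sorted(catalog,
                    key=lambda title: cosine(catalog[title], query_vec),
                    reverse=True)
    return ranked[:k]

query = [0.85, 0.15, 0.05]  # stand-in for the embedded user request
context = ", ".join(retrieve(query))
prompt = f"Recommend one movie for tonight, choosing only from: {context}."
print(prompt)
```

The retrieved titles constrain the LLM's answer; swapping the toy similarity search for a vector database and real embeddings gives the production version of the same loop.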
This agent will be crafted to offer movie recommendations from a curated dataset of 1,000 movies. To begin, we'll run the descriptions of the 1,000 movies through an embedding model. The dataset used for this purpose can be found here.

```python
# Use OpenAI's text-embedding-ada-002 model to embed the movie descriptions
from openai import OpenAI

client = OpenAI()

def get_embedding(text, model="text-embedding-ada-002"):
    text = text.replace("\n", " ")
    return client.embeddings.create(input=[text], model=model).data[0].embedding
```

```python
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

# embed all 1,000 plot descriptions (the Description column of the dataset)
doc_result = embeddings.embed_documents(movie_data_demo.Description)
print(doc_result, len(doc_result))

# store the vectors alongside the movies
movie_data_demo["Embeddings"] = doc_result
```

The next step involves storing the generated vectors in a vector database. This is a key step in organizing our data for efficient retrieval and recommendation.
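To make the store-then-retrieve step concrete before introducing a real vector database, here is a minimal in-memory stand-in. The `TinyVectorStore` class and its `add`/`search` interface are illustrative assumptions; production systems would use a dedicated store such as FAISS, Chroma, or Pinecone.

```python
from math import sqrt

class TinyVectorStore:
    """Toy in-memory vector store: add (id, vector) pairs, query by cosine."""

    def __init__(self):
        self._items = []  # list of (item_id, vector) pairs

    def add(self, item_id, vector):
        self._items.append((item_id, vector))

    def search(self, query, k=1):
        """Return the ids of the k stored vectors closest to `query`."""
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = sqrt(sum(x * x for x in a))
            nb = sqrt(sum(y * y for y in b))
            return dot / (na * nb)

        ranked = sorted(self._items,
                        key=lambda item: cos(item[1], query),
                        reverse=True)
        return [item_id for item_id, _ in ranked[:k]]

store = TinyVectorStore()
# in the real pipeline these would be rows of movie_data_demo["Embeddings"]
store.add("movie_001", [0.9, 0.1])
store.add("movie_002", [0.1, 0.9])
print(store.search([0.8, 0.2], k=1))  # → ['movie_001']
```

A real vector database exposes essentially this same add/search contract, but with persistence and approximate-nearest-neighbor indexing so that searching a million vectors stays fast.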