DRDT: DYNAMIC REFLECTION WITH DIVERGENT THINKING FOR LLM-BASED SEQUENTIAL RECOMMENDATION
Contract ID: qZ_KNOeymhzXtWOCYKwLPkN6sKs6ONtMHb9MT07BK0Q
File Type: PDF
Entry Count: 96
Embed. Model: jina_embeddings_v2_base_en
Index Type: hnsw
4.4 DYNAMIC REFLECTION WITH USER FEEDBACK (DR)
id: e2e86c5a40d80024e217ca76187b5955 - page: 7
Addressing the temporal evolution of user interests is a crucial yet challenging aspect of sequential recommendation, especially when using LLMs. This task is more complex than standard NLP tasks like QA or sentiment analysis. In sequential recommendation, understanding user preferences goes beyond merely arranging behaviors in temporal order, due to several intricacies: 1) Temporal Evolution of Interests: User interests are not static; they evolve over time, and the factors driving these changes are often unobserved or hidden within the data. Failing to capture this temporal evolution reduces the recommendation process to simplistic methods like majority counting, which do not truly reflect individual user preferences. 2) Alternating Short- and Long-Term Correlations: User behavior often exhibits a mix of short- and long-term interests. For instance, a user might consistently enjoy fantasy movies (long-term interest) but occasionally watch a comedy for a family
id: 326aa47be82ed40b3a7c2b8be655b158 - page: 7
Table 1: Dataset Statistics

Dataset   # Users   # Items   # Interactions   Avg. Actions of Users   Avg. Actions of Items
ML-1M       6,041     3,707        1,000,209                  165.60                  269.89
Games      50,567    16,860          389,718                    7.71                   23.12
Luxury      2,028       936           18,005                    8.88                   19.26

event (short-term deviation). This interplay between consistent long-term interests and intermittent short-term preferences adds complexity to understanding user preferences over time.
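As a quick consistency check (not part of the paper's text), the two average-action columns follow from the raw counts: interactions divided by the number of users or items, respectively. A minimal Python sketch with the table values hard-coded, which reproduces the reported figures up to small rounding differences:

# Recompute the derived columns of Table 1 from the raw counts (consistency check only).
datasets = {
    "ML-1M":  {"users": 6_041,  "items": 3_707,  "interactions": 1_000_209},
    "Games":  {"users": 50_567, "items": 16_860, "interactions": 389_718},
    "Luxury": {"users": 2_028,  "items": 936,    "interactions": 18_005},
}

for name, d in datasets.items():
    avg_user = d["interactions"] / d["users"]   # Avg. Actions of Users
    avg_item = d["interactions"] / d["items"]   # Avg. Actions of Items
    print(f"{name}: {avg_user:.2f} actions/user, {avg_item:.2f} actions/item")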
id: 16d66d267ee95ad8cb1adc1d9f0f7fa4 - page: 8
The Divergent Thinking (DT) approach addresses these complexities in understanding user preferences to a certain degree, yet it faces several limitations: First, Difficulty in Preference Analysis: Generating a comprehensive user preference analysis from raw sequences is challenging, especially with complex, evolving sequences. In such scenarios, the LLM may default to simply counting the majority of occurrences, which can oversimplify and misrepresent the true, nuanced preferences of the user. Second, Vulnerability to Hallucination: DT relies on the internal reasoning capabilities of the LLM. If this reasoning is flawed or invalid, it can lead to hallucination, where the LLM generates outputs based on incorrect or unfounded assumptions. As a result, reranking based on these outputs may produce inaccurate or misleading recommendations. Third, Decision-making Complexity with Multiple Aspects: While DT provides insights into user preferences from various aspects, it can be difficult
id: 5b590b962c5015a9783bead13a7a4fb1 - page: 8
How to Retrieve?
# Search

curl -X POST "https://search.dria.co/hnsw/search" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"rerank": true, "top_n": 10, "contract_id": "qZ_KNOeymhzXtWOCYKwLPkN6sKs6ONtMHb9MT07BK0Q", "query": "What is alexanDRIA library?"}'
        
# Query

curl -X POST "https://search.dria.co/hnsw/query" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"vector": [0.123, 0.5236], "top_n": 10, "contract_id": "qZ_KNOeymhzXtWOCYKwLPkN6sKs6ONtMHb9MT07BK0Q", "level": 2}'