Abstract of the paper: Modeling long user histories plays a pivotal role in enhancing recommendation systems, allowing them to capture users' evolving preferences and produce more precise and personalized recommendations. In this study, we tackle the challenges of modeling long user histories for preference understanding in natural language. Specifically, we introduce a new User Embedding Module (UEM) that efficiently processes user history in free-form text by compressing and representing it as embeddings, which are then used as soft prompts to an LM. Our experiments demonstrate the superior capability of this approach in handling significantly longer histories compared to conventional text-based methods, yielding substantial improvements in predictive performance. Models trained using our approach exhibit substantial enhancements, with up to 0.21 and 0.25 F1 points improvement over the text-based prompting baselines. The main contribution of this research is to demonstrate the ability to bias language models via user signals.
https://doi.org/10.48550/arXiv.2401.04858
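The paper's code is not reproduced here; the snippet below is only a minimal PyTorch sketch of the soft-prompting idea the abstract describes: per-item history embeddings are compressed into a small number of vectors that are prepended to the LM's input embeddings. All module names, dimensions (`text_dim`, `lm_dim`, `k`), and the cross-attention compressor are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class UserEmbeddingModule(nn.Module):
    """Compress free-form user-history embeddings into k soft-prompt vectors (illustrative)."""
    def __init__(self, text_dim: int = 384, lm_dim: int = 768, k: int = 8):
        super().__init__()
        self.project = nn.Linear(text_dim, lm_dim)                # history-encoder dim -> LM dim
        self.compress = nn.MultiheadAttention(lm_dim, num_heads=4, batch_first=True)
        self.queries = nn.Parameter(torch.randn(1, k, lm_dim))    # learned summary slots

    def forward(self, history_embs: torch.Tensor) -> torch.Tensor:
        # history_embs: (batch, n_items, text_dim), one embedding per history entry
        h = self.project(history_embs)                            # (batch, n_items, lm_dim)
        q = self.queries.expand(h.size(0), -1, -1)                # (batch, k, lm_dim)
        soft_prompts, _ = self.compress(q, h, h)                  # (batch, k, lm_dim)
        return soft_prompts

uem = UserEmbeddingModule()
history = torch.randn(2, 100, 384)                    # 2 users, 100 history items each
prompts = uem(history)                                # (2, 8, 768) soft prompts
token_embs = torch.randn(2, 32, 768)                  # embedded task-prompt tokens
lm_inputs = torch.cat([prompts, token_embs], dim=1)   # pass to the LM via inputs_embeds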
# Search
curl -X POST "https://search.dria.co/hnsw/search" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"rerank": true, "top_n": 10, "contract_id": "0pt1V_TSiZLIBmkYFTXzWUsIXixfEyz04ZlzH7bxDBo", "query": "What is alexanDRIA library?"}'
# Query
curl -X POST "https://search.dria.co/hnsw/query" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"vector": [0.123, 0.5236], "top_n": 10, "contract_id": "0pt1V_TSiZLIBmkYFTXzWUsIXixfEyz04ZlzH7bxDBo", "level": 2}'