Abstract—Large language models (LLMs) have demonstrated remarkable capabilities and have been extensively deployed across various domains, including recommender systems. Numerous studies have employed specialized prompts to harness the in-context learning capabilities intrinsic to LLMs. For example, LLMs are prompted to act as zero-shot rankers for listwise ranking, evaluating candidate items generated by a retrieval model for recommendation. Recent research further uses instruction tuning to align LLMs with human preferences for more promising recommendations. Despite its potential, current research overlooks the integration of multiple ranking tasks to enhance model performance. Moreover, the signal from the conventional recommendation model is not integrated into the LLM, limiting current system performance. In this paper, we introduce RecRanker, tailored for instruction tuning an LLM to serve as the Ranker for top-k Recommendations. Specifically, we introduce importance-aware sampling, clustering-based sampling, and a penalty for repetitive sampling to select high-quality, representative, and diverse users as training data. To enhance the prompt, we introduce a position shifting strategy to mitigate position bias and augment the prompt with auxiliary information from conventional recommendation models, thereby enriching the contextual understanding of the LLM. Subsequently, we use the sampled data to assemble an instruction-tuning dataset with the augmented prompts, comprising three distinct ranking tasks: pointwise, pairwise, and listwise ranking. We further propose a hybrid ranking method that ensembles these ranking tasks to enhance model performance. Our empirical evaluations demonstrate the effectiveness of the proposed RecRanker in both direct and sequential recommendation scenarios.
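The abstract mentions a hybrid ranking method that ensembles the pointwise, pairwise, and listwise ranking tasks, but does not spell out the aggregation. The following is a minimal sketch assuming a simple weighted, Borda-style rank-score fusion; the function names and the fusion rule are illustrative assumptions, not the paper's exact method.

# Hedged sketch: fuse three task-specific rankings into one top-k list.
# The weighted Borda-style aggregation below is an assumption for illustration;
# RecRanker's actual ensembling scheme may differ.
from collections import defaultdict
from typing import Dict, List


def rank_to_scores(ranking: List[str]) -> Dict[str, float]:
    """Convert an ordered candidate list into Borda-style scores (higher = better)."""
    n = len(ranking)
    return {item: float(n - idx) for idx, item in enumerate(ranking)}


def hybrid_rank(
    pointwise: List[str],   # candidates ordered by pointwise LLM scores
    pairwise: List[str],    # candidates ordered via pairwise comparisons
    listwise: List[str],    # candidates ordered by a single listwise prompt
    weights=(1.0, 1.0, 1.0),
    top_k: int = 10,
) -> List[str]:
    """Aggregate the three rankings with a weighted sum of their Borda scores."""
    fused = defaultdict(float)
    for ranking, w in zip((pointwise, pairwise, listwise), weights):
        for item, score in rank_to_scores(ranking).items():
            fused[item] += w * score
    return sorted(fused, key=fused.get, reverse=True)[:top_k]


if __name__ == "__main__":
    # Toy example with three (partially disagreeing) rankings over four items.
    print(hybrid_rank(["A", "B", "C", "D"], ["B", "A", "D", "C"], ["A", "C", "B", "D"], top_k=3))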
# Search: retrieve results for a text query against the given contract's index
# (top_n bounds the number of hits; rerank requests an additional reranking pass)
curl -X POST "https://search.dria.co/hnsw/search" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"rerank": true, "top_n": 10, "contract_id": "HoYKS_8cc8jN4cp-Fc4GaFgCVc6KxaMACJenAOOoO9I", "query": "What is alexanDRIA library?"}'
# Query: retrieve nearest neighbors for a raw embedding vector against the same index
curl -X POST "https://search.dria.co/hnsw/query" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"vector": [0.123, 0.5236], "top_n": 10, "contract_id": "HoYKS_8cc8jN4cp-Fc4GaFgCVc6KxaMACJenAOOoO9I", "level": 2}'