A paper list on large language models for ranking, currently maintained by Qi Liu at the Gaoling School of Artificial Intelligence, Renmin University of China.
- Document Ranking with a Pretrained Sequence-to-Sequence Model. EMNLP 2020.
  Rodrigo Nogueira, Zhiying Jiang, Ronak Pradeep, and Jimmy Lin.
  2020.5 [pdf]
- The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models. ArXiv.
  Ronak Pradeep, Rodrigo Nogueira, and Jimmy Lin.
  2021.1 [pdf]
- Improving Passage Retrieval with Zero-Shot Question Generation. EMNLP 2022.
  Devendra Singh Sachan, Mike Lewis, Mandar Joshi, Armen Aghajanyan, Wen-tau Yih, Joelle Pineau, and Luke Zettlemoyer.
  2022.4 [pdf]
- Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agents. EMNLP 2023.
  Weiwei Sun, Lingyong Yan, Xinyu Ma, Pengjie Ren, Dawei Yin, and Zhaochun Ren.
  2023.4 [pdf]
- Large Language Models Are Effective Text Rankers with Pairwise Ranking Prompting. NAACL 2024.
  Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Le Yan, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, and Michael Bendersky.
  2023.6 [pdf]
- Fine-Tuning LLaMA for Multi-Stage Text Retrieval. SIGIR 2024.
  Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, and Jimmy Lin.
  2023.10 [pdf]
- RankT5: Fine-Tuning T5 for Text Ranking with Ranking Losses. SIGIR 2023.
  Honglei Zhuang, Zhen Qin, Rolf Jagerman, Kai Hui, Ji Ma, Jing Lu, Jianmo Ni, Xuanhui Wang, and Michael Bendersky.
  2022.10 [pdf]
- Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models. NAACL 2024.
  Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, and Ferhan Ture.
  2023.10 [pdf]
- Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels. NAACL 2024.
  Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, and Michael Bendersky.
  2023.10 [pdf]
- PaRaDe: Passage Ranking Using Demonstrations with Large Language Models. EMNLP 2023.
  Andrew Drozdov, Honglei Zhuang, Zhuyun Dai, Zhen Qin, Razieh Rahimi, Xuanhui Wang, Dana Alon, Mohit Iyyer, Andrew McCallum, Donald Metzler, and Kai Hui.
  2023.10 [pdf]
- A Two-Stage Adaptation of Large Language Models for Text Ranking. ACL 2024.
  Longhui Zhang, Yanzhao Zhang, Dingkun Long, Pengjun Xie, Meishan Zhang, and Min Zhang.
  2023.11 [pdf]
- ListT5: Listwise Reranking with Fusion-in-Decoder Improves Zero-Shot Retrieval. ACL 2024.
  Soyoung Yoon, Eunbi Choi, Jiyeon Kim, Hyeongu Yun, Yireun Kim, and Seung-won Hwang.
  2024.2 [pdf]
- Consolidating Ranking and Relevance Predictions of Large Language Models through Post-Processing. EMNLP 2024.
  Le Yan, Zhen Qin, Honglei Zhuang, Rolf Jagerman, Xuanhui Wang, Michael Bendersky, and Harrie Oosterhuis.
  2024.4 [pdf]
- Generating Diverse Criteria On-the-Fly to Improve Point-Wise LLM Rankers. ArXiv.
  Fang Guo, Wenyu Li, Honglei Zhuang, Yun Luo, Yafu Li, Le Yan, and Yue Zhang.
  2024.4 [pdf]
- LLM-RankFusion: Mitigating Intrinsic Inconsistency in LLM-Based Ranking. ArXiv.
  Yifan Zeng, Ojas Tendolkar, Raymond Baartmans, Qingyun Wu, Huazheng Wang, and Lizhong Chen.
  2024.5 [pdf]
- TourRank: Utilizing Large Language Models for Documents Ranking with a Tournament-Inspired Strategy. ArXiv.
  Yiqun Chen, Qi Liu, Yi Zhang, Weiwei Sun, Daiting Shi, Jiaxin Mao, and Dawei Yin.
  2024.6 [pdf]
- Improving Zero-Shot LLM Re-Ranker with Risk Minimization. EMNLP 2024.
  Xiaowei Yuan, Zhao Yang, Yequan Wang, Jun Zhao, and Kang Liu.
  2024.6 [pdf]
- APEER: Automatic Prompt Engineering Enhances Large Language Model Reranking. ArXiv.
  Can Jin, Hongwu Peng, Shiyu Zhao, Zhenting Wang, Wujiang Xu, Ligong Han, Jiahui Zhao, Kai Zhong, Sanguthevar Rajasekaran, and Dimitris N. Metaxas.
  2024.6 [pdf]
- DemoRank: Selecting Effective Demonstrations for Large Language Models in Ranking Task. ArXiv.
  Wenhan Liu, Yutao Zhu, and Zhicheng Dou.
  2024.6 [pdf]
- ReasoningRank: Teaching Student Models to Rank through Reasoning-Based Knowledge Distillation. ArXiv.
  Yuelyu Ji, Zhuochun Li, Rui Meng, and Daqing He.
  2024.10 [pdf]
- A Setwise Approach for Effective and Highly Efficient Zero-Shot Ranking with Large Language Models. SIGIR 2024.
  Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, and Guido Zuccon.
  2023.10 [pdf]
- Top-Down Partitioning for Efficient List-Wise Ranking. ArXiv.
  Andrew Parry, Sean MacAvaney, and Debasis Ganguly.
  2024.5 [pdf]
- Leveraging Passage Embeddings for Efficient Listwise Reranking with Large Language Models. ArXiv.
  Qi Liu, Bo Wang, Nan Wang, and Jiaxin Mao.
  2024.6 [pdf]
- FIRST: Faster Improved Listwise Reranking with Single Token Decoding. EMNLP 2024.
  Revanth Gangi Reddy, JaeHyeok Doo, Yifei Xu, Md Arafat Sultan, Deevya Swain, Avirup Sil, and Heng Ji.
  2024.6 [pdf]
- Attention in Large Language Models Yields Efficient Zero-Shot Re-Rankers. ArXiv.
  Shijie Chen, Bernal Jiménez Gutiérrez, and Yu Su.
  2024.10 [pdf]
- A Thorough Comparison of Cross-Encoders and LLMs for Reranking SPLADE. ArXiv.
  Hervé Déjean, Stéphane Clinchant, and Thibault Formal.
  2024.3 [pdf]
- An Investigation of Prompt Variations for Zero-Shot LLM-Based Rankers. ArXiv.
  Shuoqi Sun, Shengyao Zhuang, Shuai Wang, and Guido Zuccon.
  2024.6 [pdf]
- Ranked List Truncation for Large Language Model-Based Re-Ranking. SIGIR 2024.
  Chuan Meng, Negar Arabzadeh, Arian Askari, Mohammad Aliannejadi, and Maarten de Rijke.
  2024.4 [pdf]
- Probing Ranking LLMs: Mechanistic Interpretability in Information Retrieval. ArXiv.
  Tanya Chowdhury and James Allan.
  2024.10 [pdf]
- RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models. ArXiv.
  Ronak Pradeep, Sahel Sharifymoghaddam, and Jimmy Lin.
  2023.9 [pdf]
- RankZephyr: Effective and Robust Zero-Shot Listwise Reranking Is a Breeze! ArXiv.
  Ronak Pradeep, Sahel Sharifymoghaddam, and Jimmy Lin.
  2023.12 [pdf]
- Rank-without-GPT: Building GPT-Independent Listwise Rerankers on Open-Source Large Language Models. ArXiv.
  Xinyu Zhang, Sebastian Hofstätter, Patrick Lewis, Raphael Tang, and Jimmy Lin.
  2023.12 [pdf]