Modeling Relevance Ranking under the Pre-training and Fine-tuning Paradigm
Lin Bo ¹, Liang Pang 庞亮 ³, Gang Wang ⁴, Jun Xu 徐君 ², XiuQiang He 何秀强 ⁴, Ji-Rong Wen 文继荣 ²
¹ School of Information, Renmin University of China, Beijing, China
² Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China
³ Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
⁴ Huawei Noah’s Ark Lab, Hong Kong, China
arXiv, 12 August 2021
Abstract

Recently, pre-trained language models such as BERT have been applied to document ranking for information retrieval. These methods usually first pre-train a general language model on a large unlabeled corpus and then conduct ranking-specific fine-tuning on expert-labeled relevance datasets. Though preliminary successes have been observed in a variety of IR tasks, considerable room remains for further improvement.

Ideally, an IR system would model relevance from a user-system dualism: the user's view and the system's view. The user's view judges relevance based on the activities of real users, while the system's view focuses on relevance signals from the system side, e.g., from experts or algorithms. Inspired by the user-system relevance views and the success of pre-trained language models, in this paper we propose a novel ranking framework called Pre-Rank that takes both the user's view and the system's view into consideration under the pre-training and fine-tuning paradigm. Specifically, to model the user's view of relevance, Pre-Rank pre-trains the initial query-document representations on large-scale user activity data such as click logs. To model the system's view of relevance, Pre-Rank further fine-tunes the model on expert-labeled relevance data. More importantly, the pre-trained representations are fine-tuned together with handcrafted learning-to-rank features under a wide and deep network architecture. In this way, Pre-Rank models relevance by incorporating knowledge and signals from both real search users and IR experts.
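The wide and deep combination described above can be sketched as follows. This is an illustrative toy, not the paper's implementation: the class name `WideAndDeepRanker`, the single hidden layer, and all dimensions are assumptions. The deep side scores the pre-trained query-document representation, the wide side scores the handcrafted learning-to-rank features, and the two parts are summed into one relevance score.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class WideAndDeepRanker:
    """Toy wide-and-deep relevance scorer (illustrative sketch only).

    deep part: small MLP over the pre-trained query-document representation
    wide part: linear layer over handcrafted learning-to-rank features
    final score: sum of the two parts plus a bias
    """
    def __init__(self, rep_dim, feat_dim, hidden=32):
        self.W1 = rng.normal(0.0, 0.1, (rep_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w_deep = rng.normal(0.0, 0.1, hidden)
        self.w_wide = rng.normal(0.0, 0.1, feat_dim)
        self.b = 0.0

    def score(self, reps, feats):
        # deep path over pre-trained representations
        deep = relu(reps @ self.W1 + self.b1) @ self.w_deep
        # wide path over handcrafted features (e.g., BM25, query length)
        wide = feats @ self.w_wide
        return deep + wide + self.b

# Rank five candidate documents for one query.
model = WideAndDeepRanker(rep_dim=16, feat_dim=4)
reps = rng.normal(size=(5, 16))   # pre-trained query-doc representations
feats = rng.normal(size=(5, 4))   # handcrafted learning-to-rank features
scores = model.score(reps, feats)
ranking = np.argsort(-scores)     # highest score first
print(ranking)
```

In training, both paths would be updated jointly on the expert-labeled data with a ranking loss, so the fine-tuning stage adjusts the pre-trained (user-view) representations and the weights on the expert-designed (system-view) features together.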

To verify the effectiveness of Pre-Rank, we present two implementations that use BERT and SetRank as the underlying ranking models, respectively. Experimental results based on three publicly available benchmarks showed that in both implementations, Pre-Rank outperformed the underlying ranking models and achieved state-of-the-art performances. The results demonstrate the effectiveness of Pre-Rank in combining the user-system views of relevance.