What is it about?
Neural retrievers and rankers achieve strong performance despite never being explicitly trained on classical IR features and axioms. In this work, we examine whether such models implicitly acquire these behaviors during training, focusing on term frequency, a foundational and ever-present feature in traditional IR. We identify specific model heads that encode term frequency information, and we find that the model gradually assigns less influence to each additional term repetition, consistent with axiomatic predictions. However, this behavior deteriorates at higher frequencies, indicating that the model does not robustly adhere to the axiom across all term-frequency regimes.
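As a rough illustration of the diminishing-returns behavior described above, the sketch below scores a toy document with an increasing number of repetitions of a query term and inspects the score deltas. This is a minimal sketch, not the paper's experimental setup: the cross-encoder model, the query, and the document text are all illustrative assumptions.

# Illustrative sketch only; not the paper's experimental setup.
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "coffee health benefits"
base = "A review of studies on diet and wellbeing."

# Build documents containing the query term 0..8 times.
docs = [base + " coffee" * k for k in range(9)]
scores = model.predict([(query, d) for d in docs])

# If the axiomatic prediction holds, deltas should be positive but shrinking.
for k in range(1, len(scores)):
    delta = scores[k] - scores[k - 1]
    print(f"tf={k}: score={scores[k]:.4f}  delta={delta:+.4f}")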
Read the Original
This page is a summary of: Reproducing and Extending Causal Insights Into Term Frequency Computation in Neural Rankers, December 2025, ACM (Association for Computing Machinery). DOI: 10.1145/3767695.3769507.