RobustMask: Certified Robustness against Adversarial Neural Ranking Attack via Randomized Masking
Abstract
Neural ranking models have achieved remarkable progress and are now widely deployed in real-world applications such as Retrieval-Augmented Generation (RAG). However, like other neural architectures, they remain vulnerable to adversarial manipulation: subtle character-, word-, or phrase-level perturbations can poison retrieval results and artificially promote targeted candidates, undermining the integrity of search engines and downstream systems. Existing defenses rely either on heuristics that generalize poorly or on certified methods that assume overly strong knowledge of the adversary, limiting their practical use. To address these challenges, we propose RobustMask, a novel defense that combines the context-prediction capability of pretrained language models with a randomized masking-based smoothing mechanism, hardening neural ranking models against adversarial perturbations at the character, word, and phrase levels. Leveraging the pairwise comparison ability of ranking models together with probabilistic analysis, we provide a theoretical proof of RobustMask's certified top-K robustness. Extensive experiments further show that RobustMask certifies over 20% of candidate documents within the top-10 ranking positions against adversarial perturbations affecting up to 30% of their content. These results demonstrate the effectiveness of RobustMask in improving the adversarial robustness of neural ranking models and mark a significant step toward stronger security guarantees for real-world retrieval systems.
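To make the randomized-masking smoothing idea concrete, the minimal Python sketch below shows one plausible way to form a smoothed pairwise preference between two candidate documents: repeatedly mask a fraction of each document's tokens and vote over the base ranker's comparisons. All names here (random_mask, smoothed_preference, the toy_score stand-in, and the mask_rate/n_samples parameters) are illustrative assumptions, not the paper's implementation; RobustMask additionally uses a pretrained language model's context prediction and a statistical certification procedure that this sketch omits.

    import random
    from typing import Callable, List


    def random_mask(tokens: List[str], mask_rate: float, mask_token: str = "[MASK]") -> List[str]:
        # Replace a random subset of token positions with a mask token.
        n_mask = int(round(mask_rate * len(tokens)))
        masked_positions = set(random.sample(range(len(tokens)), n_mask))
        return [mask_token if i in masked_positions else tok for i, tok in enumerate(tokens)]


    def smoothed_preference(
        score: Callable[[str, List[str]], float],  # base ranker: (query, doc tokens) -> relevance
        query: str,
        doc_a: List[str],
        doc_b: List[str],
        mask_rate: float = 0.3,
        n_samples: int = 200,
    ) -> float:
        # Monte Carlo estimate of Pr[score(query, masked doc_a) > score(query, masked doc_b)]
        # under independent random maskings of both documents.
        wins = 0
        for _ in range(n_samples):
            if score(query, random_mask(doc_a, mask_rate)) > score(query, random_mask(doc_b, mask_rate)):
                wins += 1
        return wins / n_samples


    if __name__ == "__main__":
        # Toy stand-in for a neural ranker: relevance = query-term overlap count.
        def toy_score(query: str, doc_tokens: List[str]) -> float:
            terms = set(query.lower().split())
            return float(sum(tok.lower() in terms for tok in doc_tokens))

        q = "certified robustness for neural ranking"
        relevant = "randomized masking yields certified robustness for neural ranking models".split()
        irrelevant = "a short note about preparing pasta at home for dinner".split()
        print(smoothed_preference(toy_score, q, relevant, irrelevant))

In a certification setting, the decisive quantity would presumably be a high-confidence lower bound on this preference probability (e.g., from a binomial confidence interval) rather than the raw Monte Carlo estimate, so that a top-K guarantee holds against any perturbation within the stated budget.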
Cite This Paper
Liu, J., Chen, Z., Zhu, R., Chen, M., Gong, Y., Lu, W., & Wang, X. (2025). RobustMask: Certified Robustness against Adversarial Neural Ranking Attack via Randomized Masking. arXiv preprint arXiv:2512.23307.