Academia is thriving like never before. Thousands of papers are submitted to conferences on hot research topics, such as artificial intelligence and computer vision. To handle this growth, systems for automatic paper-reviewer assignment are increasingly used during the review process. These systems employ statistical topic models from machine learning to characterize the content of papers and automate their assignment to reviewers.
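To illustrate the kind of pipeline such assignment systems use, the following is a minimal sketch of topic-model-based matching, assuming scikit-learn's `LatentDirichletAllocation`. The corpora, variable names, and similarity measure are illustrative assumptions, not the internals of any particular conference system.

```python
# Hypothetical sketch: match submissions to reviewers via topic similarity.
# Corpora and names below are invented for illustration.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviewer_docs = [  # e.g., abstracts of each reviewer's past papers
    "adversarial examples attack neural network robustness evasion",
    "cryptographic protocol key exchange encryption signatures",
]
submissions = ["we craft adversarial perturbations against a neural network"]

# Fit a topic model on all documents and get per-document topic mixtures.
vec = CountVectorizer()
counts = vec.fit_transform(reviewer_docs + submissions)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics = lda.fit_transform(counts)  # rows sum to 1 (topic distributions)

rev_topics, sub_topics = topics[: len(reviewer_docs)], topics[len(reviewer_docs):]
# Assign each submission to the reviewer with the most similar topic mixture.
similarity = sub_topics @ rev_topics.T
assignment = np.argmax(similarity, axis=1)
```

Because the assignment is derived entirely from the inferred topic mixtures, anything that shifts a paper's topic vector also shifts who reviews it.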
In this keynote talk, we explore the attack surface introduced by entrusting the matching of reviewers to machine-learning algorithms. In particular, we introduce an attack that modifies a given paper so that it selects its own reviewers. Technically, this attack builds on a novel optimization strategy that alternates between fooling the topic model and preserving the semantics of the document. In an empirical evaluation with a (simulated) conference, our attack successfully selects and removes reviewers in different scenarios, while the tampered papers remain indistinguishable from innocuous submissions to human readers. The talk is based on a paper by Eisenhofer & Quiring et al. published at the USENIX Security Symposium in 2023.
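The alternating optimization can be pictured with a toy sketch: greedily substitute words to push a simple topic scorer toward a target reviewer's topic, while an edit budget stands in for the semantics-preservation step. The topic lists, synonym table, and scoring function below are invented for illustration and are far simpler than the paper's actual method.

```python
# Hypothetical sketch of the alternating idea: fool a (toy) topic scorer,
# but cap the number of word changes as a crude stand-in for preserving
# the document's semantics. All names and data here are illustrative.

TOPIC_WORDS = {  # toy topic model: topic -> indicative words
    "security": {"attack", "exploit", "adversarial"},
    "vision": {"image", "pixel", "segmentation"},
}
SYNONYMS = {"picture": "image", "dot": "pixel", "assault": "attack"}

def topic_score(words, topic):
    # How strongly the word list matches the target topic.
    return sum(w in TOPIC_WORDS[topic] for w in words)

def attack(words, target_topic, budget=2):
    words = list(words)
    edits = 0
    for i, w in enumerate(words):
        if edits >= budget:  # semantics-preservation constraint (edit budget)
            break
        sub = SYNONYMS.get(w)
        if sub is None:
            continue
        trial = words[:i] + [sub] + words[i + 1:]
        if topic_score(trial, target_topic) > topic_score(words, target_topic):
            words[i] = sub  # keep the swap only if it moves the topic score
            edits += 1
    return words

doc = ["a", "picture", "of", "a", "dot", "assault"]
print(attack(doc, "vision"))  # swaps "picture"->"image", "dot"->"pixel"
```

In the real attack, the topic model is the conference's assignment model, and the semantics step relies on far stronger constraints than a fixed edit budget, so that the tampered paper reads identically to a human.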