Defending Against Knowledge Poisoning Attacks During Retrieval-Augmented Generation

Kennedy Edemacu, Vinay M. Shashidhar, Micheal Tuape, Dan Abudu, Beakcheol Jang, Jong Wook Kim

Research output: Preprint or Working paperPreprint

1 Downloads (Pure)

Abstract

Retrieval-Augmented Generation (RAG) has emerged as a powerful approach to boost the capabilities of large language models (LLMs) by incorporating external, up-to-date knowledge sources. However, this introduces a potential vulnerability to knowledge poisoning attacks, where attackers can compromise the knowledge source to mislead the generation model. One such attack is the PoisonedRAG in which the injected adversarial texts steer the model to generate an attacker-chosen response to a target question. In this work, we propose novel defense methods, FilterRAG and ML-FilterRAG, to mitigate the PoisonedRAG attack. First, we propose a new property to uncover distinct properties to differentiate between adversarial and clean texts in the knowledge data source. Next, we employ this property to filter out adversarial texts from clean ones in the design of our proposed approaches. Evaluation of these methods using benchmark datasets demonstrate their effectiveness, with performances close to those of the original RAG systems.
Original languageEnglish
Number of pages9
Publication statusPublished - 4 Aug 2025

Bibliographical note

This license allows reusers to distribute, remix, adapt, and build upon the material in any medium or format, so long as attribution is given to the creator. The license allows for commercial use: https://creativecommons.org/licenses/by/4.0/

Keywords

  • cs.LG
  • cs.IR

Fingerprint

Dive into the research topics of 'Defending Against Knowledge Poisoning Attacks During Retrieval-Augmented Generation'. Together they form a unique fingerprint.

Cite this