"Goal-Guided Generative Prompt Injection Attack on Large Language Models."

Chong Zhang et al. (2024)

DOI: 10.1109/ICDM59182.2024.00119

access: closed

type: Conference or Workshop Paper

metadata version: 2025-03-04