"An LLM can Fool Itself: A Prompt-Based Adversarial Attack."

Xilie Xu et al. (2023)

DOI: 10.48550/ARXIV.2310.13345

access: open

type: Informal or Other Publication (arXiv preprint)

metadata version: 2023-12-13
