Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks
Contract ID: g_86KqxKu5MJAGtg8vPFFkTaRFL26GzL94UWiKThqTM
File Type: PDF
Entry Count: 82
Embed. Model: jina_embeddings_v2_base_en
Index Type: hnsw

We show that even the most recent safety-aligned LLMs are not robust to simple adaptive jailbreaking attacks. First, we demonstrate how to successfully leverage access to logprobs for jailbreaking: we initially design an adversarial prompt template (sometimes adapted to the target LLM), and then we apply random search on a suffix to maximize the target logprob (e.g., of the token "Sure"), potentially with multiple restarts. In this way, we achieve nearly 100% attack success rate—according to GPT-4 as a judge—on GPT-3.5/4, Llama-2-Chat-7B/13B/70B, Gemma-7B, and R2D2 from HarmBench that was adversarially trained against the GCG attack. We also show how to jailbreak all Claude models—which do not expose logprobs—via either a transfer or prefilling attack with 100% success rate. In addition, we show how to use random search on a restricted set of tokens for finding trojan strings in poisoned models—a task that shares many similarities with jailbreaking—which is the algorithm that brought us first place in the SaTML'24 Trojan Detection Competition. The common theme behind these attacks is that adaptivity is crucial: different models are vulnerable to different prompting templates (e.g., R2D2 is very sensitive to in-context learning prompts), some models have unique vulnerabilities based on their APIs (e.g., prefilling for Claude), and in some settings it is crucial to restrict the token search space based on prior knowledge (e.g., for trojan detection). We provide the code, prompts, and logs of the attacks.
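The suffix-optimization loop the abstract describes (random search with restarts, keeping a mutation only when it raises the target logprob) can be sketched with a toy objective standing in for the model's logprob. The scoring function, alphabet, and hyperparameters below are illustrative assumptions, not the paper's exact setup:

```python
import random

# Hypothetical stand-in for the target logprob (e.g., of the token "Sure")
# returned by the model's API; here a toy objective that rewards suffixes
# containing many copies of the letter "a".
def target_logprob(suffix: str) -> float:
    return sum(1.0 for ch in suffix if ch == "a") - 0.01 * len(suffix)

ALPHABET = "abcdefghijklmnopqrstuvwxyz!?"

def random_search(n_iters=500, suffix_len=20, n_restarts=3, seed=0):
    rng = random.Random(seed)
    best_suffix, best_score = None, float("-inf")
    for _ in range(n_restarts):
        suffix = [rng.choice(ALPHABET) for _ in range(suffix_len)]
        score = target_logprob("".join(suffix))
        for _ in range(n_iters):
            cand = suffix[:]
            cand[rng.randrange(suffix_len)] = rng.choice(ALPHABET)  # mutate one position
            cand_score = target_logprob("".join(cand))
            if cand_score >= score:  # keep the mutation only if the objective does not drop
                suffix, score = cand, cand_score
        if score > best_score:  # restarts escape bad initializations
            best_suffix, best_score = "".join(suffix), score
    return best_suffix, best_score
```

In the real attack the objective is the logprob of the target token given the full prompt template plus the current suffix, queried from the model's API.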

Approach. Random search could be directly applied to optimize the score provided by the reward model on some training examples. However, despite the triggers being relatively short, the search space is extremely large, as the vocabulary T of the Llama-2 tokenizer comprises 32001 tokens, and straightforward RS becomes particularly inefficient. Notably, the five LLMs, denoted M1, . . . , M5, were fine-tuned from the same base model and thus share the same weight initialization, including that of the embedding matrix mapping tokens to the LLMs' continuous feature space (each token ti is associated with a vector vi ∈ R^4096, for i = 0, . . . , 32000). Given that the tokens that are part of the trigger appear abnormally frequently, we anticipate that their corresponding embedding vectors deviate significantly from their initial values. Building on this intuition, for any pair of models Mr and Ms with embedding matrices vr and vs, we compute the distance ||vi^r − vi^s||_2 for each token and sort the tokens in decreasing order of this distance.
id: ea55c082e839c646c52c5382074771a4 - page: 11
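The chunk above ranks tokens by how far their embeddings have drifted between two fine-tunes and then intersects the top-k sets across model pairs. A toy numpy sketch of that pipeline (the sizes, trigger token ids, and the simplification that all five models share one trigger are illustrative assumptions; the real setting has 32001 tokens, d = 4096, and k = 1000):

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM, K = 200, 16, 20              # toy sizes

base = rng.normal(size=(VOCAB, DIM))     # shared weight initialization
# Five "fine-tunes": small independent perturbations of the same base embeddings.
emb = {m: base + 0.01 * rng.normal(size=(VOCAB, DIM)) for m in range(5)}
TRIGGER = [3, 17, 42]                    # hypothetical trigger token ids
for m in emb:                            # poisoning moves these embeddings far from init
    emb[m][TRIGGER] += rng.normal(size=(len(TRIGGER), DIM))

def top_k(er, es, k=K):
    dists = np.linalg.norm(er - es, axis=1)      # ||v_i^r - v_i^s||_2 per token
    return set(np.argsort(-dists)[:k].tolist())  # k most-drifted tokens

def cand(r):
    # intersect the top-k sets of model r against every other model
    return set.intersection(*[top_k(emb[r], emb[s]) for s in range(5) if s != r])
```

Because the trigger embeddings move much farther than the background noise between fine-tunes, they survive every pairwise top-k cut and dominate the small intersected pool.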
top-k(Mr, Ms) = {ti ∈ T : rs(i) ≤ k}, where rs(i) denotes the rank of token ti in this sorted order. The final pool of candidate trigger tokens for a model Mr is the intersection of such sets: cand(Mr) = ∩_{s≠r} top-k(Mr, Ms).

id: ea55c082e839c646c52c5382074771a4 - page: 11

Table 7: Trojan competition results. We present the scores obtained by implanting the triggers identified by each approach alongside no trigger and the true trigger for the five target models, where lower values indicate higher success. The total score is the sum over models.

Method                         Model 1  Model 2  Model 3  Model 4  Model 5    Total
no trigger                        2.05     2.78     2.55     3.34     1.94    12.66
3rd classified                    5.98     5.20     4.63     4.51    -0.42    19.89
2nd classified                   -5.73    -6.46    -4.84    -4.93    -7.26   -29.21
RS on selected tokens (ours)     -6.30    -6.98    -5.52    -4.70    -6.73   -30.22
true trigger                    -11.96    -7.20    -5.76    -4.93    -7.63   -37.48

id: b340939ce657da50e7e63beefd072387 - page: 11
id: a4f04701ff5f16ee18e63ef27a37dfcc - page: 12

Given that the five models are fine-tuned using different random subsets of the training data, this approach is approximate, but it narrows down the candidate tokens to a manageable pool (e.g., k = 1000 yields |cand(Mr)| ∈ [33, 62] for r = 2, . . . , 5, and |cand(M1)| = 480), which makes random search feasible. Our strategy to identify the jailbreaking trigger for a poisoned model Mr is to conduct a random search in token space over the set cand(Mr). We restrict the search to triggers of five tokens, as this length yielded the best results. In each iteration, we filter out candidate triggers that do not start with a blank space, contain further blank spaces, or are not invariant to decoding-encoding, following the competition hints. The objective minimized by RS is the average score of the reward model on a batch of training examples, aiming to ensure the trigger's universality (generalization to unseen prompts).
id: 7ac105398d3b4a3b661c80a60a400623 - page: 12
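A minimal sketch of this filtered random search over the candidate pool. The pool, the validity filters, and a toy reward model stand in for the real tokenizer and reward model; all names and values here are illustrative assumptions (the real decode-encode invariance check is replaced by the two string filters):

```python
import random

CAND = [" un", "lock", "ward", " zig", "zag", "mora", " flux", "ion"]  # toy candidate pool
PROMPTS = ["how do I fix", "tell me about", "write a story"]           # toy training batch
TRIGGER_LEN = 5

def reward_score(prompt: str, trigger: str) -> float:
    # toy reward model: scores lower (better for the attacker) for vowel-rich triggers
    return 3.0 - sum(trigger.count(v) for v in "aeiou")

def valid(tokens) -> bool:
    trigger = "".join(tokens)
    # competition hints: start with a blank space, contain no further blank spaces
    return trigger.startswith(" ") and " " not in trigger[1:]

def objective(tokens) -> float:
    # average reward over a batch of prompts, to encourage universality
    trigger = "".join(tokens)
    return sum(reward_score(p, trigger) for p in PROMPTS) / len(PROMPTS)

def random_search(n_iters=300, seed=0):
    rng = random.Random(seed)
    tokens = None
    while tokens is None or not valid(tokens):   # sample a valid starting trigger
        tokens = [rng.choice(CAND) for _ in range(TRIGGER_LEN)]
    best = objective(tokens)
    for _ in range(n_iters):
        cand = tokens[:]
        cand[rng.randrange(TRIGGER_LEN)] = rng.choice(CAND)  # mutate one token
        if valid(cand) and objective(cand) <= best:          # minimize the objective
            tokens, best = cand, objective(cand)
    return "".join(tokens), best
```

This is the same accept-if-better loop as the jailbreaking suffix search, only minimizing instead of maximizing and with the search space restricted to cand(Mr).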
How to Retrieve?
# Search

curl -X POST "https://search.dria.co/hnsw/search" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"rerank": true, "top_n": 10, "contract_id": "g_86KqxKu5MJAGtg8vPFFkTaRFL26GzL94UWiKThqTM", "query": "What is alexanDRIA library?"}'
        
# Query

curl -X POST "https://search.dria.co/hnsw/query" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"vector": [0.123, 0.5236], "top_n": 10, "contract_id": "g_86KqxKu5MJAGtg8vPFFkTaRFL26GzL94UWiKThqTM", "level": 2}'
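The same search call can be issued from Python. A minimal stdlib sketch that builds (but does not send) the request shown in the curl example above; the endpoint, headers, and payload fields are taken from that example, and the query string is a placeholder:

```python
import json
import urllib.request

DRIA_SEARCH_URL = "https://search.dria.co/hnsw/search"   # from the curl example
CONTRACT_ID = "g_86KqxKu5MJAGtg8vPFFkTaRFL26GzL94UWiKThqTM"

def build_search_request(api_key: str, query: str, top_n: int = 10, rerank: bool = True):
    payload = {"rerank": rerank, "top_n": top_n,
               "contract_id": CONTRACT_ID, "query": query}
    return urllib.request.Request(
        DRIA_SEARCH_URL,
        data=json.dumps(payload).encode(),
        headers={"x-api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

req = build_search_request("<YOUR_API_KEY>", "adaptive jailbreak attacks")
# urllib.request.urlopen(req) would send it; omitted here to stay offline
```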