Conference item
What makes and breaks safety fine-tuning? A mechanistic study
- Abstract:
- Safety fine-tuning helps align Large Language Models (LLMs) with human preferences for their safe deployment. To better understand the underlying factors that make models safe via safety fine-tuning, we design a synthetic data generation framework that captures salient aspects of an unsafe input by modeling the interaction between the task the model is asked to perform (e.g., "design") and the specific concept that task is performed upon (e.g., a "cycle" vs. a "bomb"). Using this framework, we investigate three well-known safety fine-tuning methods (supervised safety fine-tuning, direct preference optimization, and unlearning) and provide significant evidence that these methods minimally transform MLP weights, specifically aligning unsafe inputs with the null space of those weights. This yields a clustering of inputs based on whether the model deems them safe or not. Correspondingly, when an adversarial input (e.g., a jailbreak) is provided, its activations lie closer to those of safe samples, leading the model to process such an input as if it were safe. We validate our findings, wherever possible, on real-world models, specifically Llama-2 7B and Llama-3 8B.
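- The null-space claim in the abstract can be illustrated with a short, hypothetical sketch (not the authors' code): given an MLP weight matrix from a safety-fine-tuned model, one can measure what fraction of an input activation's norm falls in that matrix's null space; the abstract's claim is that this fraction is larger for unsafe inputs than for safe ones. All function names, shapes, and the toy data below are assumptions made purely for illustration.

```python
# Minimal, illustrative sketch of the null-space measurement described in the
# abstract. Shapes, names, and the random toy data are assumptions; this is not
# the authors' experimental code.
import numpy as np

def null_space_fraction(W: np.ndarray, h: np.ndarray, tol: float = 1e-6) -> float:
    """Fraction of the norm of activation h that lies in the (numerical) null space of W.

    W: (d_out, d_in) MLP weight matrix; h: (d_in,) activation vector.
    """
    # Right singular vectors whose singular values are (numerically) zero span the null space of W.
    _, s, vt = np.linalg.svd(W)
    rank = int(np.sum(s > tol))
    null_basis = vt[rank:]            # orthonormal rows spanning null(W); may be empty
    if null_basis.size == 0:
        return 0.0
    proj = null_basis @ h             # coordinates of h in the null-space basis
    return float(np.linalg.norm(proj) / (np.linalg.norm(h) + 1e-12))

# Toy usage: per the abstract, after safety fine-tuning the unsafe-input activation
# should have a larger null-space fraction than the safe-input activation.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))                    # stand-in for a fine-tuned MLP weight
h_safe, h_unsafe = rng.normal(size=16), rng.normal(size=16)
print(null_space_fraction(W, h_safe), null_space_fraction(W, h_unsafe))
```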
- Publication status:
- Published
- Peer review status:
- Peer reviewed
Access Document
- Files:
- Version of record (PDF, 6.5 MB)
- Publication website:
- https://openreview.net/forum?id=BS2CbUkJpy
Authors
Funding
- Funder:
- Engineering and Physical Sciences Research Council
- Funder identifier:
- https://ror.org/0439y7842
- Grant:
- EP/W002981/1
- Publisher:
- OpenReview
- Host title:
- Proceedings of the Mechanistic Interpretability Workshop 2024 hosted by the 41st International Conference on Machine Learning (ICML 2024)
- Publication date:
- 2024-07-31
- Acceptance date:
- 2024-05-02
- Event title:
- Mechanistic Interpretability Workshop 2024 hosted by the 41st International Conference on Machine Learning (ICML 2024)
- Event location:
- Vienna, Austria
- Event website:
- https://icml2024mi.pages.dev/
- Event start date:
- 2024-07-27
- Event end date:
- 2024-07-27
- Language:
- English
- Keywords:
- Pubs id:
- 2036881
- Local pid:
- pubs:2036881
- Deposit date:
- 2024-10-07
Terms of use
- Copyright holder:
- Jain et al.
- Copyright date:
- 2024
- Rights statement:
- © The Authors 2024. This is an open access article under the CC-BY license.
- Notes:
- This paper was presented at the Mechanistic Interpretability Workshop 2024 hosted by the 41st International Conference on Machine Learning (ICML 2024), 27th July 2024, Vienna, Austria.
- Licence:
- CC Attribution (CC BY)