
LLM-Tuning-Safety/LLM-Tuning-Safety.github.io


Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!

This repository hosts the project page for the paper "Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!"
