Researchers found an easy way to retrain publicly available neural nets so they would answer harmful questions despite their guardrails, such as how to cheat on an exam, where to find pornography, or even how to kill a neighbor.
Originally appeared here: Generative AI can easily be made malicious despite guardrails, say scholars