This is the official repository for the paper *Backdooring Bias into Text-to-Image Models*. In this work, we present a method for injecting bias into text-to-image models via a backdoor attack, allowing an adversary to embed arbitrary biases that affect image generation for all users, including benign ones.