Safety-trained language models routinely refuse requests for help circumventing rules. But not all rules deserve compliance. A model that refuses to help someone evade a discriminatory housing policy in the same way it refuses to help someone commit fraud is making a moral error, not a safe choice. We call this pattern blind refusal: the tendency of models to refuse requests for help breaking rules without regard to whether the underlying rule is defensible.