When in Doubt, Cascade: Towards Building Efficient and Capable Guardrails

Authors

  • Manish Nagireddy IBM Research
  • Inkit Padhi IBM Research
  • Soumya Ghosh Merck
  • Prasanna Sattigeri IBM Research

DOI:

https://doi.org/10.1609/aies.v8i2.36676

Abstract

Large language models (LLMs) demonstrate convincing performance on a variety of downstream tasks. However, these systems are prone to generating undesirable outputs such as harmful and biased text. To remedy such generations, the development of guardrail (or detector) models has gained traction. Motivated by findings from developing a detector for social bias, we adopt the notion of a use-mention distinction, which we identified as the primary source of under-performance in the preliminary versions of our social bias detector. Armed with this information, we describe a fully extensible and reproducible synthetic data generation pipeline which leverages taxonomy-driven instructions to create targeted and labeled data. Using this pipeline, we generate over 300K unique contrastive samples and provide extensive experiments to systematically evaluate performance on a suite of open-source datasets. We show that our method achieves competitive performance with a fraction of the cost in compute and offers insight into iteratively developing efficient and capable guardrail models. Warning: This paper contains examples of text which are toxic, biased, and potentially harmful.
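The abstract describes generating labeled contrastive samples from taxonomy-driven instructions, with the use-mention distinction separating harmful endorsements of a stereotype from benign references to it. The sketch below is a minimal, hypothetical illustration of that idea only: the taxonomy categories, template sentences, and function names are invented for illustration (the paper's actual pipeline prompts an LLM with taxonomy-driven instructions rather than filling string templates).

```python
# Hypothetical sketch of taxonomy-driven contrastive pair generation.
# Categories and seed stereotypes here are illustrative placeholders,
# NOT examples from the paper's taxonomy.
TAXONOMY = {
    "age": ["older adults cannot learn new technology"],
    "gender": ["women are worse at math"],
}

def make_contrastive_pair(category, stereotype):
    """Build a (use, mention) pair reflecting the use-mention distinction.

    The 'use' sample endorses the stereotype (labeled biased); the
    'mention' sample merely refers to it critically (labeled not biased).
    """
    use = f"I think {stereotype}."
    mention = f'Claiming that "{stereotype}" is a harmful {category} stereotype.'
    return [
        {"text": use, "label": "biased", "category": category},
        {"text": mention, "label": "not_biased", "category": category},
    ]

def build_dataset(taxonomy):
    """Expand every taxonomy entry into labeled contrastive samples."""
    samples = []
    for category, stereotypes in taxonomy.items():
        for stereotype in stereotypes:
            samples.extend(make_contrastive_pair(category, stereotype))
    return samples
```

In the real pipeline each taxonomy node would drive an LLM instruction instead of a fixed template, but the contrastive labeling structure carries over: every stereotype yields a matched pair that differs only in whether the bias is used or mentioned.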

Published

2025-10-15

How to Cite

Nagireddy, M., Padhi, I., Ghosh, S., & Sattigeri, P. (2025). When in Doubt, Cascade: Towards Building Efficient and Capable Guardrails. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 8(2), 1812-1821. https://doi.org/10.1609/aies.v8i2.36676