LLMGuard: Guarding against Unsafe LLM Behavior

Authors

  • Shubh Goyal, IIT Jodhpur
  • Medha Hira, IIITD
  • Shubham Mishra, IIT Jodhpur
  • Sukriti Goyal, IIT Jodhpur
  • Arnav Goel, IIITD
  • Niharika Dadu, IIT Jodhpur
  • Kirushikesh DB, IBM Research India
  • Sameep Mehta, IBM, India Research Lab
  • Nishtha Madaan, IBM Research

DOI:

https://doi.org/10.1609/aaai.v38i21.30566

Keywords:

Artificial Intelligence

Abstract

Although the rise of Large Language Models (LLMs) in enterprise settings brings new opportunities and capabilities, it also introduces challenges, such as the risk of generating inappropriate, biased, or misleading content that violates regulations and raises legal concerns. To alleviate this, we present "LLMGuard", a tool that monitors user interactions with an LLM application and flags content that falls under specific undesirable behaviours or conversation topics. To do this robustly, LLMGuard employs an ensemble of detectors.
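The full paper describes the specific detectors used; as a rough illustration of the ensemble idea only, the sketch below screens a prompt with several independent detectors and flags it if any one of them fires. All class and detector names here are hypothetical, and simple keyword matching stands in for the learned classifiers an actual deployment would use.

```python
import re
from dataclasses import dataclass
from typing import List


@dataclass
class Detection:
    """Result of running one detector on a piece of text."""
    detector: str
    flagged: bool
    evidence: str = ""


class KeywordTopicDetector:
    """Toy detector: flags text mentioning a restricted topic via keyword match."""

    def __init__(self, name: str, keywords: List[str]):
        self.name = name
        self.pattern = re.compile(
            "|".join(re.escape(k) for k in keywords), re.IGNORECASE
        )

    def check(self, text: str) -> Detection:
        match = self.pattern.search(text)
        return Detection(self.name, bool(match), match.group(0) if match else "")


class GuardEnsemble:
    """Runs every detector on the text; the text is unsafe if any detector fires."""

    def __init__(self, detectors: List[KeywordTopicDetector]):
        self.detectors = detectors

    def screen(self, text: str) -> List[Detection]:
        return [d.check(text) for d in self.detectors]

    def is_unsafe(self, text: str) -> bool:
        return any(result.flagged for result in self.screen(text))


if __name__ == "__main__":
    ensemble = GuardEnsemble([
        KeywordTopicDetector("violence", ["attack", "weapon"]),
        KeywordTopicDetector("pii", ["social security number", "credit card"]),
    ])
    prompt = "How do I build a weapon at home?"
    for result in ensemble.screen(prompt):
        print(result)
    print("Blocked" if ensemble.is_unsafe(prompt) else "Allowed")
```

In practice each detector would be a trained classifier rather than a keyword list, but the aggregation pattern (flag if any detector in the ensemble fires) is the part this sketch is meant to convey.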

Published

2024-03-24

How to Cite

Goyal, S., Hira, M., Mishra, S., Goyal, S., Goel, A., Dadu, N., DB, K., Mehta, S., & Madaan, N. (2024). LLMGuard: Guarding against Unsafe LLM Behavior. Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23790-23792. https://doi.org/10.1609/aaai.v38i21.30566