Customizing Language Model Responses with Contrastive In-Context Learning

Authors

  • Xiang Gao, Intuit
  • Kamalika Das, Intuit

DOI:

https://doi.org/10.1609/aaai.v38i16.29760

Keywords:

NLP: Generation, NLP: (Large) Language Models

Abstract

Large language models (LLMs) are becoming increasingly important for machine learning applications. However, it can be challenging to align LLMs with our intent, particularly when we want them to generate content that is preferred over alternatives, or to respond in a style or tone that is hard to describe. To address this challenge, we propose an approach that uses contrastive examples to better describe our intent. This involves providing positive examples that illustrate the true intent, along with negative examples that show what characteristics we want LLMs to avoid. The negative examples can be retrieved from labeled data, written by a human, or generated by the LLM itself. Before generating an answer, we ask the model to analyze the examples to teach itself what to avoid. This reasoning step provides the model with the appropriate articulation of the user's need and guides it towards generating a better answer. We tested our approach on both synthesized and real-world datasets, including StackExchange and Reddit, and found that it significantly improves performance compared to standard few-shot prompting.
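The prompting strategy described in the abstract can be sketched in a few lines: assemble positive and negative examples into the prompt, then instruct the model to analyze the contrast before answering. The function below is a minimal illustrative sketch, not the paper's actual implementation; the prompt wording and structure are assumptions.

```python
def build_contrastive_prompt(question, positives, negatives):
    """Assemble a prompt with contrastive examples and a self-analysis step.

    positives/negatives are lists of (question, answer) pairs illustrating
    desired and undesired responses, respectively. Hypothetical sketch of
    the contrastive in-context learning idea from the abstract.
    """
    lines = ["Below are example answers. Some are good, some are bad."]
    # Positive examples illustrate the true intent.
    for i, (q, a) in enumerate(positives, 1):
        lines.append(f"Good example {i}:\nQ: {q}\nA: {a}")
    # Negative examples show characteristics the model should avoid.
    for i, (q, a) in enumerate(negatives, 1):
        lines.append(f"Bad example {i}:\nQ: {q}\nA: {a}")
    # Reasoning step: ask the model to articulate the contrast first.
    lines.append(
        "Before answering, briefly explain what makes the good examples "
        "preferable and what the bad examples should have avoided. "
        "Then answer the new question in the style of the good examples."
    )
    lines.append(f"Q: {question}\nA:")
    return "\n\n".join(lines)

prompt = build_contrastive_prompt(
    "How do I reverse a list in Python?",
    positives=[("How do I sort a list?",
                "Use sorted(xs); it returns a new sorted list.")],
    negatives=[("How do I sort a list?", "Just search for it online.")],
)
print(prompt)
```

The resulting string would then be sent to an LLM of choice; the negative examples can come from labeled data, human authors, or the model's own prior outputs, as the abstract notes.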

Published

2024-03-24

How to Cite

Gao, X., & Das, K. (2024). Customizing Language Model Responses with Contrastive In-Context Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16), 18039-18046. https://doi.org/10.1609/aaai.v38i16.29760

Section

AAAI Technical Track on Natural Language Processing I