Beyond Thumbs Up/Down: Untangling Challenges of Fine-Grained Feedback for Text-to-Image Generation

Authors

  • Katherine M. Collins University of Cambridge
  • Najoung Kim Google DeepMind
  • Yonatan Bitton Google Research
  • Verena Rieser Google DeepMind
  • Shayegan Omidshafiei Field AI
  • Yushi Hu University of Washington
  • Sherol Chen Google DeepMind
  • Senjuti Dutta University of Tennessee
  • Minsuk Chang Google Research
  • Kimin Lee Korea Advanced Institute of Science and Technology
  • Youwei Liang University of California, San Diego
  • Georgina Evans Google DeepMind
  • Sahil Singla Google DeepMind
  • Gang Li Google Research
  • Adrian Weller University of Cambridge; The Alan Turing Institute
  • Junfeng He Google Research
  • Deepak Ramachandran Google DeepMind
  • Krishnamurthy Dj Dvijotham ServiceNow

DOI:

https://doi.org/10.1609/aies.v7i1.31637

Abstract

Human feedback plays a critical role in learning and refining reward models for text-to-image generation, but the optimal form the feedback should take for learning an accurate reward function has not been conclusively established. This paper investigates the effectiveness of fine-grained feedback, which captures nuanced distinctions in image quality and prompt alignment, compared to traditional coarse-grained feedback (for example, thumbs up/down or ranking between a set of options). While fine-grained feedback holds promise, particularly for systems catering to diverse societal preferences, we show that demonstrating its superiority to coarse-grained feedback is not automatic. Through experiments on real and synthetic preference data, we surface the complexities of building effective models due to the interplay of model choice, feedback type, and the alignment between human judgment and computational interpretation. We identify key challenges in eliciting and utilizing fine-grained feedback, prompting a reassessment of its assumed benefits and practicality. Our findings -- for example, that fine-grained feedback can lead to worse models for a fixed budget in some settings, whereas in controlled settings with known attributes fine-grained rewards can indeed be more helpful -- call for careful consideration of feedback attributes and may motivate novel modeling approaches to appropriately unlock the potential value of fine-grained feedback in the wild.
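To make the contrast between the two feedback regimes concrete, below is a minimal, illustrative sketch (not taken from the paper): a coarse-grained reward head trained on pairwise thumbs up/down comparisons with a Bradley-Terry-style loss, versus a fine-grained head trained to regress per-attribute ratings. The feature dimension, attribute count, and attribute names are hypothetical stand-ins for image/prompt embeddings and annotation schemas.

```python
# Illustrative sketch only; dimensions and attribute names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, n_attrs = 16, 2  # e.g., attributes: "fidelity", "prompt alignment"

coarse_head = nn.Linear(feat_dim, 1)       # single scalar reward
fine_head = nn.Linear(feat_dim, n_attrs)   # one reward per attribute

def coarse_loss(feat_preferred, feat_rejected):
    """Bradley-Terry-style loss on a coarse pairwise preference."""
    margin = coarse_head(feat_preferred) - coarse_head(feat_rejected)
    return -F.logsigmoid(margin).mean()

def fine_loss(feat, attr_ratings):
    """Regression loss on fine-grained per-attribute ratings in [0, 1]."""
    return F.mse_loss(torch.sigmoid(fine_head(feat)), attr_ratings)

# Toy batch of 8 synthetic examples.
fw, fl = torch.randn(8, feat_dim), torch.randn(8, feat_dim)
ratings = torch.rand(8, n_attrs)
print(coarse_loss(fw, fl).item(), fine_loss(fw, ratings).item())
```

Under a fixed annotation budget, the fine-grained head must spread the same number of labeled examples across multiple attribute targets, which is one way the trade-off discussed in the abstract can arise.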

Published

2024-10-16

How to Cite

Collins, K. M., Kim, N., Bitton, Y., Rieser, V., Omidshafiei, S., Hu, Y., Chen, S., Dutta, S., Chang, M., Lee, K., Liang, Y., Evans, G., Singla, S., Li, G., Weller, A., He, J., Ramachandran, D., & Dvijotham, K. D. (2024). Beyond Thumbs Up/Down: Untangling Challenges of Fine-Grained Feedback for Text-to-Image Generation. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 293-303. https://doi.org/10.1609/aies.v7i1.31637