RMLer: Synthesizing Novel Objects Across Diverse Categories via Reinforcement Mixing Learning
DOI: https://doi.org/10.1609/aaai.v40i8.37552

Abstract
Novel object synthesis by integrating distinct textual concepts from diverse categories remains a significant challenge in text-to-image generation. Existing methods often suffer from insufficient concept mixing, lack of rigorous evaluation, and suboptimal outputs, resulting in conceptual imbalance, superficial combinations, or mere juxtapositions. To address these limitations, we propose Reinforcement Mixing Learning (RMLer), a framework that formulates cross-category concept fusion as a reinforcement learning problem: mixed features serve as states, mixing strategies as actions, and visual outcomes as rewards. Specifically, we design an MLP policy network to predict dynamic coefficients for blending cross-category text embeddings. We further introduce visual rewards based on (1) semantic similarity and (2) compositional balance between the fused object and its constituent concepts, and optimize the policy via proximal policy optimization. At inference time, a selection strategy leverages these rewards to curate the highest-quality fused objects. Extensive experiments demonstrate that RMLer synthesizes coherent, high-fidelity objects from diverse categories and consistently outperforms existing methods. Our work provides a robust framework for generating novel visual concepts, with promising applications in film, gaming, and design.
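The core idea of the abstract — an MLP policy that reads the concatenated concept embeddings (the state) and emits mixing coefficients (the action) used to blend them — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the embedding dimension, network sizes, the two-coefficient softmax action space, and all variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_policy(state, w1, b1, w2, b2):
    """Tiny MLP policy: maps the concatenated concept embeddings (the
    'state') to mixing coefficients via a softmax (the 'action')."""
    h = np.tanh(state @ w1 + b1)          # one hidden layer
    logits = h @ w2 + b2
    e = np.exp(logits - logits.max())     # numerically stable softmax
    return e / e.sum()

# Hypothetical 8-dim text embeddings for two concepts from different
# categories (stand-ins for the encoder outputs used by a T2I model).
dim = 8
e_a = rng.standard_normal(dim)
e_b = rng.standard_normal(dim)
state = np.concatenate([e_a, e_b])

# Randomly initialized policy weights (illustrative only; in RMLer these
# would be trained with PPO against the visual rewards).
w1 = rng.standard_normal((2 * dim, 16)) * 0.1
b1 = np.zeros(16)
w2 = rng.standard_normal((16, 2)) * 0.1
b2 = np.zeros(2)

alpha = mlp_policy(state, w1, b1, w2, b2)  # coefficients, sum to 1
mixed = alpha[0] * e_a + alpha[1] * e_b    # blended embedding for generation
```

In the full method, `mixed` would condition the text-to-image generator, the rendered image would be scored by the semantic-similarity and compositional-balance rewards, and those rewards would drive the PPO update of the policy weights.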
Published
2026-03-14
How to Cite
Li, J., Chen, Z., Chen, H., Chen, S., & Yang, J. (2026). RMLer: Synthesizing Novel Objects Across Diverse Categories via Reinforcement Mixing Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 40(8), 6262–6270. https://doi.org/10.1609/aaai.v40i8.37552
Section
AAAI Technical Track on Computer Vision V