CreBench: Human-Aligned Creativity Evaluation from Idea to Process to Product
DOI:
https://doi.org/10.1609/aaai.v40i32.39962
Abstract
Human-defined creativity is highly abstract, making it challenging for multimodal large language models (MLLMs) to comprehend and assess creativity in a way that aligns with human judgments. The absence of an existing benchmark further exacerbates this dilemma. To this end, we propose CreBench, which consists of two key components: 1) an evaluation benchmark covering multiple dimensions from creative idea to process to product; 2) CreMIT (Creativity Multimodal Instruction Tuning dataset), a multimodal creativity evaluation dataset consisting of 2.2K diverse-sourced multimodal samples, 79.2K human feedback annotations, and 4.7M multi-typed instructions. Specifically, to ensure MLLMs can handle diverse creativity-related queries, we prompt GPT to refine the human feedback, activating stronger creativity assessment capabilities. CreBench serves as a foundation for building MLLMs that understand human-aligned creativity. Based on CreBench, we fine-tune open-source general-purpose MLLMs, resulting in CreExpert, a multimodal creativity evaluation expert model. Extensive experiments demonstrate that the proposed CreExpert models achieve significantly better alignment with human creativity evaluation than state-of-the-art MLLMs, including the most advanced GPT-4V and Gemini-Pro-Vision.
Published
2026-03-14
How to Cite
Xue, K., Li, C., Ou, Z., Zhang, G., Lu, K., Lyu, S., … Cen, J. (2026). CreBench: Human-Aligned Creativity Evaluation from Idea to Process to Product. Proceedings of the AAAI Conference on Artificial Intelligence, 40(32), 27441–27449. https://doi.org/10.1609/aaai.v40i32.39962
Issue
Section
AAAI Technical Track on Machine Learning IX