Creative Thought Embeddings: A Framework for Instilling Creativity in Large Language Models
DOI:
https://doi.org/10.1609/aaaiss.v6i1.36064
Abstract
Creative intelligence represents a critical frontier in artificial intelligence research. While modern large language models (LLMs) excel at logical reasoning and factual responses, they often produce outputs that are predictable and lack genuine originality. This paper introduces Creative Thought Embeddings (CTE), a framework that embeds a creative bias directly into the latent representations of LLMs. By integrating a structured, multi-phase process that mirrors human divergent thinking, beginning with brainstorming and followed by synthesis, CTE guides models to generate outputs that are more novel, surprising, and contextually rich. The effectiveness of CTE is demonstrated across domains including humor generation, narrative storytelling, and educational explanations. Evaluation results, which employ quantitative lexical metrics and GPT-4o–based automated scoring, show that while baseline models may exhibit greater surface-level lexical diversity, CTE enhances deeper semantic novelty and creative coherence. Finally, the paper presents a comparative analysis with standard prompt engineering and chain-of-thought approaches, discusses the trade-offs, and offers recommendations for further research and implementation.
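The abstract contrasts surface-level lexical diversity with deeper semantic novelty. A common quantitative lexical metric for the former is distinct-n (the ratio of unique n-grams to total n-grams); the page does not specify which metrics the paper uses, so the sketch below is illustrative only:

```python
def distinct_n(text: str, n: int = 2) -> float:
    """Distinct-n: fraction of n-grams in the text that are unique.

    A standard surface-level lexical diversity measure. The paper's
    exact metric set is not given on this page; this is a hedged
    example of the kind of metric the abstract refers to.
    """
    tokens = text.lower().split()
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.0
    return len(set(ngrams)) / len(ngrams)
```

Higher distinct-n indicates less n-gram repetition; because it operates on surface tokens only, a text can score high on distinct-n while still being semantically predictable, which is the gap the abstract says CTE targets.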
Published
2025-08-01
How to Cite
Mahmoud, Q. H. (2025). Creative Thought Embeddings: A Framework for Instilling Creativity in Large Language Models. Proceedings of the AAAI Symposium Series, 6(1), 285–292. https://doi.org/10.1609/aaaiss.v6i1.36064
Issue
Section
Human-AI Collaboration: Exploring Diversity of Human Cognitive Abilities and Varied AI Models for Hybrid Intelligent Systems