GameTileNet: A Semantic Dataset for Low-Resolution Game Art in Procedural Content Generation
DOI:
https://doi.org/10.1609/aiide.v21i1.36805
Abstract
GameTileNet is a dataset that provides semantic labels for low-resolution digital game art, advancing procedural content generation (PCG) and related AI research framed as a vision-language alignment task. Large Language Models (LLMs) and image-generative AI models have enabled indie developers to create visual assets, such as sprites, for game interactions. However, generating visuals that align with game narratives remains challenging because AI outputs are inconsistent and require manual adjustment by human artists. The diversity of visual representations in automatically generated game content is also limited by the imbalanced distribution of styles in the training data. GameTileNet addresses this by collecting artist-created game tiles from OpenGameArt.org under Creative Commons licenses and providing semantic annotations to support narrative-driven content generation. The dataset introduces a pipeline for object detection in low-resolution tile-based game art (e.g., 32x32 pixels) and annotates semantics, connectivity, and object classifications. GameTileNet is a valuable resource for improving PCG methods, supporting narrative-rich game content, and establishing a baseline for object detection in low-resolution, non-photorealistic images.
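The abstract describes annotating tile-based game art at fixed tile sizes (e.g., 32x32 pixels). As a minimal illustration of that unit of annotation, the sketch below slices a tilesheet into 32x32 tiles; it is not code from the paper, and the `slice_tilesheet` helper and the synthetic tilesheet array are hypothetical stand-ins.

```python
# Hypothetical sketch: splitting a tilesheet into fixed-size tiles,
# the per-tile unit that a dataset like GameTileNet would label.
import numpy as np

def slice_tilesheet(sheet: np.ndarray, tile: int = 32) -> list:
    """Split an (H, W, C) tilesheet into tile x tile patches, row-major."""
    h, w = sheet.shape[:2]
    return [
        sheet[y:y + tile, x:x + tile]
        for y in range(0, h - tile + 1, tile)
        for x in range(0, w - tile + 1, tile)
    ]

# Synthetic 64x96 RGB "tilesheet": 2 rows x 3 cols of 32x32 tiles.
sheet = np.zeros((64, 96, 3), dtype=np.uint8)
tiles = slice_tilesheet(sheet)
print(len(tiles), tiles[0].shape)  # 6 (32, 32, 3)
```

Each resulting patch could then be paired with the dataset's semantic, connectivity, and object-class annotations.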
Published
2025-11-07
How to Cite
Chen, Y.-C., & Jhala, A. (2025). GameTileNet: A Semantic Dataset for Low-Resolution Game Art in Procedural Content Generation. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 21(1), 12-21. https://doi.org/10.1609/aiide.v21i1.36805
Section: Full Technical