STELAR-VISION: Self-Topology-Aware Efficient Learning for Aligned Reasoning in Vision
DOI:
https://doi.org/10.1609/aaai.v40i44.41091
Abstract
Vision-language models (VLMs) have made significant strides in reasoning, yet they often struggle with complex multimodal tasks and tend to generate overly verbose outputs. A key limitation is their reliance on chain-of-thought (CoT) reasoning, despite many tasks benefiting from alternative topologies like trees or graphs. To address this, we introduce STELAR-Vision, a training framework for topology-aware reasoning. At its core is TopoAug, a synthetic data pipeline that enriches training with diverse topological structures. Using supervised fine-tuning and reinforcement learning, we post-train Qwen2VL models with both accuracy and efficiency in mind. Additionally, we propose Frugal Learning, which reduces output length with minimal accuracy loss. On MATH-V and VLM_S2H, STELAR-Vision improves accuracy by 9.7% over its base model and surpasses the larger Qwen2VL-72B-Instruct by 7.3%. On five out-of-distribution benchmarks, it outperforms Phi-4-Multimodal-Instruct by up to 28.4% and LLaMA-3.2-11B-Vision-Instruct by up to 13.2%, demonstrating strong generalization. Compared to Chain-Only training, our approach achieves 4.3% higher overall accuracy on in-distribution datasets and consistently outperforms across all OOD benchmarks.
Published
2026-03-14
How to Cite
Li, C., Zhang, H., Yang, Z., Chen, F., Wang, Z., Bolimera, A., & Savvides, M. (2026). STELAR-VISION: Self-Topology-Aware Efficient Learning for Aligned Reasoning in Vision. Proceedings of the AAAI Conference on Artificial Intelligence, 40(44), 37574–37582. https://doi.org/10.1609/aaai.v40i44.41091
Issue
Vol. 40 No. 44 (2026)
Section
AAAI Special Track on AI Alignment