Inference-time Scaling for Diffusion-based Audio Super-resolution
DOI: https://doi.org/10.1609/aaai.v40i17.38520

Abstract
Diffusion models have demonstrated remarkable success in generative tasks, including audio super-resolution (SR). In many applications, such as movie post-production and album mastering, substantial computational budgets are available for achieving superior audio quality. However, while existing diffusion approaches typically increase the number of sampling steps to improve quality, performance remains fundamentally limited by the stochastic nature of the sampling process, leading to high-variance, quality-limited outputs. Here, rather than simply increasing the number of sampling steps, we propose a different paradigm: inference-time scaling for SR, which explores multiple solution trajectories during the sampling process. We develop task-specific verifiers and introduce two search algorithms for SR, random search and zero-order search. By actively guiding the exploration of the high-dimensional solution space through verifier-algorithm combinations, we enable more robust and higher-quality outputs. Through extensive validation across diverse audio domains (speech, music, sound effects) and frequency ranges, we demonstrate consistent performance gains, achieving improvements of up to 9.70% in aesthetics, 5.88% in speaker similarity, 15.20% in word error rate, and 46.98% in spectral distance for speech SR from 4 kHz to 24 kHz, showcasing the effectiveness of our approach.

Published
2026-03-14
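The verifier-guided random search the abstract describes can be pictured as a best-of-N loop: draw several stochastic sampling trajectories and keep the candidate a verifier scores highest. The sketch below is a hypothetical illustration only; `diffusion_sample` and `verifier_score` are stand-in stubs, not the paper's model or verifiers.

```python
import random

def diffusion_sample(lr_audio, seed):
    """Stand-in for one stochastic diffusion SR sampling run.
    (Hypothetical stub: a real sampler maps low-res audio to a
    high-res candidate; here we just add seeded Gaussian noise.)"""
    rng = random.Random(seed)
    return [x + rng.gauss(0.0, 0.1) for x in lr_audio]

def verifier_score(candidate, reference):
    """Toy task-specific verifier: negative squared error to a
    reference signal. (The paper's verifiers target qualities like
    aesthetics or spectral distance; this stub just rewards closeness.)"""
    return -sum((c - r) ** 2 for c, r in zip(candidate, reference))

def random_search_sr(lr_audio, reference, n_trajectories=8):
    """Inference-time scaling via random search: sample several
    trajectories and return the one the verifier prefers."""
    candidates = [diffusion_sample(lr_audio, seed=s)
                  for s in range(n_trajectories)]
    return max(candidates, key=lambda c: verifier_score(c, reference))

if __name__ == "__main__":
    lr = [0.0] * 16
    best = random_search_sr(lr, reference=lr, n_trajectories=8)
    print(verifier_score(best, lr) >= verifier_score(diffusion_sample(lr, 0), lr))
```

Because the best-of-N candidate is chosen by the same verifier used for comparison, its score can never be worse than any single trajectory's, which is the basic guarantee that makes inference-time scaling attractive when extra compute is available.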
How to Cite
Jin, Y., Ye, Z., Tian, Z., Liu, H., Kong, Q., Guo, Y., & Xue, W. (2026). Inference-time Scaling for Diffusion-based Audio Super-resolution. Proceedings of the AAAI Conference on Artificial Intelligence, 40(17), 14982–14990. https://doi.org/10.1609/aaai.v40i17.38520
Section: AAAI Technical Track on Data Mining & Knowledge Management I