Robust Blind Text Image Deblurring via Maximum Consensus Framework

Authors

  • Zijian Min, Inha University
  • Gundu Mohamed Hassan, Inha University
  • Geun-Sik Jo, Inha University

DOI:

https://doi.org/10.1609/aaai.v38i5.28220

Keywords:

CV: Computational Photography, Image & Video Synthesis, CSO: Constraint Optimization, CV: Learning & Optimization for CV, General

Abstract

The blind text image deblurring problem presents a formidable challenge: recovering a clean, sharp text image from a blurry observation with an unknown blur kernel. Sparsity-based strategies have demonstrated their efficacy by exploiting sparse priors on the latent image and kernel. However, existing strategies have largely neglected the influence of additional noise, which limits their performance. To overcome this limitation, we propose a novel framework designed to effectively mitigate the impact of the extensive noise prevalent in blurred images. Our approach centers on a robust Maximum Consensus Framework, in which we optimize the quantity of interest from the noisy blurry image under the maximum consensus criterion. Furthermore, we integrate the Alternating Direction Method of Multipliers (ADMM) and the Half-Quadratic Splitting (HQS) method to address the computationally intractable L0-norm problem. This strategy improves deblurring performance on blurry text images corrupted with additional synthetic noise. Experimental evaluations on various noisy blurry text images demonstrate the superiority of the proposed approach over existing methods.
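The abstract notes that the L0 norm is computationally intractable and is handled via splitting methods such as HQS. To make that idea concrete, here is a minimal 1-D sketch of half-quadratic splitting for an L0-gradient objective, min_x ||x − y||² + λ||Dx||₀, where D is a circular forward-difference operator. This is a generic illustration of the HQS mechanism (auxiliary variable, closed-form hard threshold, quadratic solve, continuation on the penalty weight), not the paper's full deblurring algorithm; all hyper-parameter values below are assumptions for the demo.

```python
import numpy as np

def hqs_l0_smooth(y, lam=0.02, beta0=0.04, beta_max=1e5, kappa=2.0):
    """Half-quadratic splitting (HQS) sketch for the 1-D L0-gradient problem
        min_x ||x - y||^2 + lam * ||D x||_0,
    with D the circular forward-difference operator. Illustrative only:
    lam, beta0, beta_max, and kappa are assumed values, not from the paper."""
    n = len(y)
    x = y.astype(float).copy()
    d = np.zeros(n)
    d[0], d[-1] = -1.0, 1.0          # convolution kernel of D (x[i+1] - x[i])
    Fd = np.fft.fft(d)
    D2 = np.abs(Fd) ** 2
    Fy = np.fft.fft(y)
    beta = beta0
    while beta < beta_max:
        # h-subproblem: min_h beta*||Dx - h||^2 + lam*||h||_0
        # has a closed-form elementwise hard-threshold solution.
        g = np.roll(x, -1) - x       # D x
        h = np.where(g ** 2 >= lam / beta, g, 0.0)
        # x-subproblem: min_x ||x - y||^2 + beta*||Dx - h||^2 is quadratic
        # and solved exactly in the Fourier domain (D is circulant).
        x = np.real(np.fft.ifft((Fy + beta * np.conj(Fd) * np.fft.fft(h))
                                / (1.0 + beta * D2)))
        beta *= kappa                # continuation: gradually tighten the split
    return x
```

Run on a noisy step signal, the loop suppresses small noise-induced gradients while keeping the large jump, which is the behavior a sparse-gradient prior is meant to enforce; the paper couples this kind of splitting with ADMM and a maximum-consensus data term for the blind deblurring setting.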

Published

2024-03-24

How to Cite

Min, Z., Hassan, G. M., & Jo, G.-S. (2024). Robust Blind Text Image Deblurring via Maximum Consensus Framework. Proceedings of the AAAI Conference on Artificial Intelligence, 38(5), 4242–4250. https://doi.org/10.1609/aaai.v38i5.28220

Issue

Vol. 38 No. 5 (2024)

Section

AAAI Technical Track on Computer Vision IV