High-Throughput, High-Performance Deep Learning-Driven Light Guide Plate Surface Visual Quality Inspection Tailored for Real-World Manufacturing Environments
Keywords: Visual Quality Inspection, Light Guide Plate, Generative Synthesis, Deep Learning, Machine-driven Design Exploration
Abstract
Light guide plates are essential optical components widely used in applications ranging from medical lighting fixtures to back-lit TV displays. An essential step in the manufacturing of light guide plates is the quality inspection of defects such as scratches, bright/dark spots, and impurities. In industry, this is mainly done through manual visual inspection for plate pattern irregularities, which is time-consuming and prone to human error, and thus acts as a significant barrier to high-throughput production. Advances in deep learning-driven computer vision have led to the exploration of automated visual quality inspection of light guide plates to improve inspection consistency, accuracy, and efficiency. However, given the computational constraints and high-throughput nature of real-world manufacturing environments, the widespread adoption of deep learning-driven visual inspection systems for light guide plates has been greatly limited by the high computational requirements and integration challenges of existing deep learning approaches in the research literature. In this work, we introduce a fully-integrated, high-throughput, high-performance deep learning-driven workflow for light guide plate surface visual quality inspection (VQI) tailored for real-world manufacturing environments. To enable automated VQI on edge computing within the fully-integrated VQI system, a highly compact deep anti-aliased attention condenser neural network (which we name LightDefectNet), tailored specifically for light guide plate surface defect detection in resource-constrained scenarios, was created via machine-driven design exploration with computational and “best-practices” constraints as well as an L1 paired classification discrepancy loss.
Experiments show that LightDefectNet achieves a detection accuracy of ∼98.2% on the LGPSDD benchmark while having just 770K parameters (∼33× and ∼6.9× fewer than ResNet-50 and EfficientNet-B0, respectively), ∼93M FLOPs (∼88× and ∼8.4× fewer than ResNet-50 and EfficientNet-B0, respectively), and ∼8.8× faster inference speed than EfficientNet-B0 on an embedded ARM processor. As such, the proposed deep learning-driven workflow, integrated with the aforementioned LightDefectNet neural network, is highly suited for high-throughput, high-performance light guide plate surface VQI within real-world manufacturing environments.
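The abstract mentions an L1 paired classification discrepancy loss but does not spell out its formulation. As a rough illustration only, one plausible reading is the mean absolute (L1) difference between the softmax outputs of two paired classifier heads; the sketch below implements that interpretation in plain Python, and the function name and batch layout (a list of per-sample logit rows per head) are assumptions, not the paper's definition:

```python
import math


def l1_paired_discrepancy_loss(logits_a, logits_b):
    """Hypothetical sketch: mean absolute (L1) difference between the
    softmax probabilities of two paired classifier heads over a batch.

    logits_a, logits_b: lists of per-sample logit rows, one row per sample.
    """
    def softmax(row):
        # Subtract the max for numerical stability before exponentiating.
        m = max(row)
        exps = [math.exp(z - m) for z in row]
        s = sum(exps)
        return [e / s for e in exps]

    total, count = 0.0, 0
    for row_a, row_b in zip(logits_a, logits_b):
        for pa, pb in zip(softmax(row_a), softmax(row_b)):
            total += abs(pa - pb)
            count += 1
    return total / count


# Identical logits from both heads give zero discrepancy:
# l1_paired_discrepancy_loss([[0.0, 0.0]], [[0.0, 0.0]]) -> 0.0
```

Under this interpretation, minimizing the loss would push the two heads toward agreement on each sample, while a complementary objective could maximize it to expose ambiguous inputs; how the actual workflow uses the loss is detailed in the full paper.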
How to Cite
Xu, C., Famouri, M., Bathla, G., Shafiee, M. J., & Wong, A. (2023). High-Throughput, High-Performance Deep Learning-Driven Light Guide Plate Surface Visual Quality Inspection Tailored for Real-World Manufacturing Environments. Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 15745-15751. https://doi.org/10.1609/aaai.v37i13.26869
IAAI Technical Track on Emerging Applications of AI