A Black-Box Watermarking Modulation for Object Detection Models
DOI:
https://doi.org/10.1609/aaaiss.v4i1.31772
Abstract
Training a Deep Neural Network (DNN) from scratch comes with a substantial cost in terms of money, energy, data, and hardware. When such models are misused or redistributed without authorisation, the owner faces significant financial and intellectual property (IP) losses. Therefore, there is a pressing need to protect the IP of Machine Learning models to avoid these issues. ML watermarking emerges as a promising solution for model traceability. Watermarking has been well-studied for image classification models, but there is a significant research gap in its application to other tasks like object detection, for which no effective methods have been proposed yet. In this paper, we introduce a novel black-box watermarking method for object detection models. Our contributions include a watermarking technique that maps visual information to text semantics and a comparative study of fine-tuning techniques’ impact on watermark detectability. We present the model’s detection performance and evaluate fine-tuning strategies’ effectiveness in preserving watermark integrity.
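To make the black-box setting concrete, the sketch below illustrates a generic trigger-set verification step, in which a suspect detector is queried only through its prediction interface and ownership is claimed if its outputs on secret trigger images match the expected target labels often enough. This is a minimal illustration of the general black-box paradigm, not the paper's own modulation; the `detect` interface, trigger-set layout, and thresholds are hypothetical placeholders.

```python
# Minimal sketch of black-box watermark verification for an object detector.
# Assumptions (not from the paper): `detect` is a black-box inference call
# returning (label, confidence) pairs; the trigger set pairs secret images
# with secret target labels; thresholds are illustrative.
from typing import Callable, List, Tuple

Detection = Tuple[str, float]  # (predicted class label, confidence score)

def verify_watermark(
    detect: Callable[[str], List[Detection]],  # image path -> detections
    trigger_set: List[Tuple[str, str]],        # (trigger image path, target label)
    score_threshold: float = 0.5,              # minimum confidence to count a detection
    match_threshold: float = 0.8,              # fraction of triggers that must match
) -> bool:
    """Return True if the suspect model reproduces the watermark behaviour."""
    matches = 0
    for image_path, target_label in trigger_set:
        detections = detect(image_path)
        # A trigger "matches" if the target label appears among confident detections.
        labels = {label for label, score in detections if score >= score_threshold}
        if target_label in labels:
            matches += 1
    match_rate = matches / max(len(trigger_set), 1)
    return match_rate >= match_threshold
```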
Published
2024-11-08
How to Cite
Lansari, M., Mattioli, L., Addad, B., Raffi, P.-M., Kapusta, K., Gonzalez, M., & Ibn Khedher, M. (2024). A Black-Box Watermarking Modulation for Object Detection Models. Proceedings of the AAAI Symposium Series, 4(1), 60-67. https://doi.org/10.1609/aaaiss.v4i1.31772
Section
AI Trustworthiness and Risk Assessment for Challenging Contexts (ATRACC)