DocEdit: Language-Guided Document Editing

Authors

  • Puneet Mathur, University of Maryland, College Park
  • Rajiv Jain, Adobe Research
  • Jiuxiang Gu, Adobe Research
  • Franck Dernoncourt, Adobe Research
  • Dinesh Manocha, University of Maryland, College Park
  • Vlad I. Morariu, Adobe Research

DOI:

https://doi.org/10.1609/aaai.v37i2.25282

Keywords:

CV: Language and Vision, CV: Applications

Abstract

Professional document editing tools require a certain level of expertise to perform complex edit operations. To make editing tools accessible to increasingly novice users, we investigate intelligent document assistant systems that can make or suggest edits based on a user's natural language request. Such a system should be able to understand the user's ambiguous requests and contextualize them with the visual cues and textual content found in a document image in order to edit localized unstructured text and structured layouts. To this end, we propose a new task of language-guided localized document editing, where the user provides a document and an open-vocabulary editing request, and the intelligent system produces a command that can be used to automate edits in real-world document editing software. In support of this task, we curate the DocEdit dataset, a collection of approximately 28K instances of user edit requests over PDF and design templates, along with their corresponding ground-truth software-executable commands. To our knowledge, this is the first dataset that provides a diverse mix of edit operations with direct and indirect references to embedded text and visual objects such as paragraphs, lists, and tables. We also propose DocEditor, a Transformer-based, localization-aware multimodal (textual, spatial, and visual) model that performs the new task. The model attends to both document objects and related text content that may be referred to in a user edit request, generating a multimodal embedding that is used to predict an edit command and the bounding box that localizes it. Our proposed model empirically outperforms baseline deep learning approaches by 15-18%, providing a strong starting point for future work.
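
To make the task's input/output format concrete, the sketch below shows one way a natural-language edit request could be grounded to a structured, software-executable command paired with a localizing bounding box, as the abstract describes. This is a minimal, hypothetical illustration: the field names (action, component, attribute, value, bbox) and the example request are assumptions for exposition, not the actual DocEdit command grammar, which is defined by the dataset and paper.

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class EditCommand:
    """Hypothetical structured edit command; fields are illustrative only."""
    action: str                                        # e.g. "modify", "delete", "add"
    component: str                                     # e.g. "paragraph", "table", "list"
    attribute: Optional[str] = None                    # e.g. "font_size"
    value: Optional[str] = None                        # e.g. "18pt"
    bbox: Optional[Tuple[int, int, int, int]] = None   # (x, y, w, h) localizing the edit


# An ambiguous user request grounded to a concrete, executable command.
request = "Make the heading under the chart a bit bigger"
command = EditCommand(
    action="modify",
    component="heading",
    attribute="font_size",
    value="18pt",
    bbox=(112, 540, 430, 36),  # predicted region in the document image (assumed values)
)
print(command)
```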

Published

2023-06-26

How to Cite

Mathur, P., Jain, R., Gu, J., Dernoncourt, F., Manocha, D., & Morariu, V. I. (2023). DocEdit: Language-Guided Document Editing. Proceedings of the AAAI Conference on Artificial Intelligence, 37(2), 1914-1922. https://doi.org/10.1609/aaai.v37i2.25282

Issue

Vol. 37 No. 2 (2023)

Section

AAAI Technical Track on Computer Vision II