DocVQA: Challenge 2020

Introduction

The first edition of the DocVQA challenge was organized in the context of the CVPR 2020 Workshop on Text and Documents in the Deep Learning Era.

The challenge is hosted on the Robust Reading Competition (RRC) portal and comprises two tasks.


Task 1 - VQA on Document Images


A typical VQA task, where natural language questions are defined over single document images and an answer has to be generated by interpreting the image.

Evaluation Metric

We use Average Normalized Levenshtein Similarity (ANLS) as the evaluation metric. For more details on the metric, please see the metric used for Task 3 of the Scene Text VQA (ST-VQA) challenge.

  • Answers are not case sensitive.

  • Answers are space sensitive.

  • Answers, and the tokens comprising them, are not limited to a fixed-size dictionary; an answer can be any word/token that appears in the document.

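For illustration, below is a minimal sketch of an ANLS computation in Python, following the definition used in the ST-VQA challenge: per question, the prediction is scored 1 - NL against the best-matching ground-truth answer, where NL is the normalized Levenshtein distance, and any score with NL at or above a threshold tau = 0.5 is truncated to 0; the final metric is the mean over questions. The function names and data layout here are illustrative, not the official evaluation code.

    def levenshtein(s, t):
        # Classic dynamic-programming edit distance.
        prev = list(range(len(t) + 1))
        for i, cs in enumerate(s, 1):
            cur = [i]
            for j, ct in enumerate(t, 1):
                cur.append(min(prev[j] + 1,                # deletion
                               cur[j - 1] + 1,             # insertion
                               prev[j - 1] + (cs != ct)))  # substitution
            prev = cur
        return prev[-1]

    def anls(predictions, ground_truths, tau=0.5):
        # predictions:   one predicted answer string per question.
        # ground_truths: a list of accepted answer strings per question.
        total = 0.0
        for pred, answers in zip(predictions, ground_truths):
            best = 0.0
            for gt in answers:
                # Lowercase both sides (answers are not case sensitive),
                # but keep spaces intact (answers are space sensitive).
                p, g = pred.lower(), gt.lower()
                nl = levenshtein(p, g) / max(len(p), len(g), 1)
                if nl < tau:  # distances at or above tau score 0
                    best = max(best, 1.0 - nl)
            total += best
        return total / len(predictions)

For example, anls(["rose"], [["Rose"]]) returns 1.0, while a prediction one edit away from a five-character answer scores 0.8.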

Task 2 - VQA on Document Images Collection


A retrieval-style task where, given a question, the aim is to identify and retrieve all documents in a large document collection that are relevant to answering it.

Evaluation Metric

Methods are ranked according to the correctness of the evidence they provide, evaluated through Mean Average Precision (MAP). If a submission also contains answers to the questions, these are evaluated as well and precision and recall metrics are reported. However, these metrics are not used to rank the methods in the competition.
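
For illustration only (not the official evaluation script), MAP over a set of questions can be sketched as below, assuming each method returns a ranked list of document IDs per question and the ground truth is the set of relevant documents for that question:

    def average_precision(ranked_docs, relevant):
        # AP for one question: precision is accumulated at every rank
        # where a relevant document appears in the returned ranking.
        hits, precision_sum = 0, 0.0
        for rank, doc in enumerate(ranked_docs, 1):
            if doc in relevant:
                hits += 1
                precision_sum += hits / rank
        return precision_sum / len(relevant) if relevant else 0.0

    def mean_average_precision(rankings, relevance):
        # MAP: the mean of the per-question average precisions.
        aps = [average_precision(r, rel) for r, rel in zip(rankings, relevance)]
        return sum(aps) / len(aps)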

More details of the tasks can be found under the Tasks tab of the competition page on the RRC portal.

Note: Although the challenge was organized as part of a workshop at CVPR 2020, it remains open to submissions after the challenge period. On the leaderboard on the RRC platform, entries submitted during the challenge period are highlighted in a different color.


Winners of the 2020 Challenge

Below are the winners of the 2020 edition of the DocVQA challenge. The first-prize winner of each task was awarded a cash prize of USD 1000, sponsored by Amazon AWS.

Task 1 Winners

  • Winner - PingAn-OneConnect-Gammalab-DQA team of OneConnect GammaLab

    • Team - Han Qiu, Guoqiang Xu, Chenjie Cao, Chao Gao, Dexun Wang, Fengxin Yang, Xiao Xie and Yu Qiu

  • Runner-up - Structural LM team from DAMO NLP

    • Team - Chenliang Li, Bin Bi, Ming Yan, Wei Wang and Songfang Huang

Task 2 Winners

  • Winner - PingAn-OneConnect-Gammalab-DQA team of OneConnect GammaLab

    • Team - Han Qiu, Guoqiang Xu, Chenjie Cao, Chao Gao, Dexun Wang, Fengxin Yang, Xiao Xie and Yu Qiu

  • Runner-up - iFLYTEK-DOCR of iFlytek

    • Team - Chenyu Liu, Fengren Wang, Jiajia Wu, Jinshui Hu, Bing Yin and Cong Liu