
ImageCLEFdrawnUI

Motivation

Description
The increasing importance of User Interfaces (UIs) for companies highlights the need for novel ways of creating them. Currently, this activity can be slow and error-prone due to the constant communication required between the specialists involved, e.g., designers and developers. Machine learning and automation could speed up this process and ease access to the digital space for companies that cannot afford it with today's tools. A first step toward building a bridge between developers and designers is to infer the intent from a hand-drawn UI (wireframe) or from a web screenshot. This is done by detecting atomic UI elements, such as images, paragraphs, containers, or buttons.

In this edition, two tasks are proposed to the participants, both requiring them to detect rectangular bounding boxes corresponding to the UI elements in the images. The first task, wireframe annotation, is a continuation of the previous edition: about 1,300 new wireframes are added to the 3,000 images of the previous data set. These new images contain a larger proportion of the rare classes to tackle the long-tail problem found in the previous edition. For the second task, we present the new challenge of screenshot annotation, where 9,276 screenshots of real websites were compiled into a data set using an in-house parser. Due to the nature of the web, the data set is noisy, e.g., some annotations correspond to invisible elements, while other elements have missing annotations. To deal with this, part of the images were cleaned manually. The development set will contain 6,555 uncleaned images and 903 cleaned images, and the test set will contain 1,818 cleaned images.

News


Preliminary Schedule

  • 16.11.2020: registration opens for all ImageCLEF tasks
  • 25.01.2021: development data release starts
  • 15.03.2021: test data release starts
  • 07.05.2021: deadline for submitting the participants' runs
  • 14.05.2021: release of the processed results by the task organizers
  • 28.05.2021: deadline for submission of working notes papers by the participants
  • 11.06.2021: notification of acceptance of the working notes papers
  • 02.07.2021: camera ready working notes papers
  • 21-24.09.2021: CLEF 2021, Bucharest, Romania

Task description


Given a set of images of hand-drawn UIs or webpage screenshots, participants are required to develop machine learning techniques that are able to predict the exact position and type of UI elements.

Data

Wireframe Task


The provided data set consists of 4,291 hand-drawn images inspired by mobile application screenshots and actual web pages. Each image comes with a manual labeling of the positions of the bounding boxes corresponding to each UI element and its type. To avoid any ambiguity, a predefined shape dictionary with 21 classes is used, e.g., paragraph, label, header. The development set contains 3,218 images, while the test set contains 1,073 images.

Comparison with last year's data set:

  • All images from last year's development set are still present in this year's development set.
  • The test set contains only new images.
  • Additional images were selected to rebalance the classes as much as possible.

Screenshots Task


The provided data set consists of 9,630 screenshots of sections and full pages from high-quality websites gathered using an in-house parser. Each image comes with a manual labeling of the positions of the bounding boxes corresponding to each UI element and its type. To avoid any ambiguity, a predefined shape dictionary with 6 classes is used, e.g., TEXT, IMAGE, BUTTON.

The development set contains 7,770 images: 6,840 noisy screenshots in the train set and 930 manually curated screenshots in the evaluation set. The test set contains 1,860 screenshots, also manually cleaned.

Evaluation methodology


The performance of the algorithms will be evaluated using the standard mean Average Precision (mAP) at IoU 0.50 and recall at IoU 0.50.

The evaluation script will be added when the submission phase starts.
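Until the official script is released, the Intersection over Union (IoU) criterion underlying both metrics can be sketched as follows. This is a minimal illustration, not the official evaluation code; it assumes boxes are given as (x1, y1, x2, y2) corner coordinates, which may differ from the submission format.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes.

    Boxes are (x1, y1, x2, y2) tuples; this coordinate convention is an
    assumption for illustration, not the official submission format.
    """
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

Under the IoU 0.50 criterion, a predicted box counts as a correct detection only if its IoU with a ground-truth box of the same class is at least 0.50.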

Participant registration

Please refer to the general ImageCLEF registration instructions.

Submission instructions


The submissions will be received through the AIcrowd system.
Participants will be permitted to submit up to 10 runs. External training data is not allowed.

The Wireframe task can be found here.

The Screenshot task can be found here.

Results

Wireframe Task


More information will be added soon!

Screenshots Task


More information will be added soon!

CEUR Working Notes

  • All participating teams with at least one graded submission, regardless of the score, should submit a CEUR working notes paper.
  • More information will be added soon!

Citations


More information will be added soon!

Organizers

  • Dimitri Fichou <dimitri.fichou(at)teleporthq.io>, teleportHQ, Cluj Napoca, Romania
  • Raul Berari <raul.berari(at)teleporthq.io>, teleportHQ, Cluj Napoca, Romania
  • Andrei Tauteanu <andrei.tauteanu(at)teleporthq.io>, teleportHQ, Cluj Napoca, Romania
  • Paul Brie <paul.brie(at)teleporthq.io>, teleportHQ, Cluj Napoca, Romania
  • Mihai Dogariu, University Politehnica of Bucharest, Romania
  • Liviu Daniel Ștefan, University Politehnica of Bucharest, Romania
  • Mihai Gabriel Constantin, University Politehnica of Bucharest, Romania
  • Bogdan Ionescu, University Politehnica of Bucharest, Romania

Acknowledgements