ImageCLEFdrawnUI

Motivation

Description

Building websites requires a very specific set of skills. Currently, the two main ways to achieve this are either using a visual website builder or programming. Both approaches have a steep learning curve. Enabling people to create websites by drawing them on a whiteboard or on a piece of paper would make the webpage building process more accessible.

A first step in capturing the intent expressed by a user through a wireframe is to correctly detect a set of atomic user interface (UI) elements in their drawings. The bounding boxes and labels resulting from this detection step can then be used to accurately generate a website layout using various heuristics.

In this context, the hand drawn website UI detection and recognition task addresses the problem of automatically recognizing the hand drawn objects representing website UIs, which can then be automatically translated into website code.

News

  • 30.10.2019: Website goes live

Preliminary Schedule

... coming soon ...

Task description

Given a set of images of hand drawn UIs, participants are required to develop machine learning techniques that are able to predict the exact position and type of UI elements.

Data

The provided data set consists of 3,000 hand drawn images inspired by mobile application screenshots and actual web pages, covering 1,000 different templates. Each image comes with manual annotations of the bounding box position and the type of each UI element. To avoid any ambiguity, a predefined shape dictionary with 21 classes is used, e.g., paragraph, label, header. The development set contains 2,000 images, while the test set contains 1,000 images.
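To make the annotation concept concrete, the sketch below shows one possible in-memory representation of an annotated image. The exact distribution format of the data set is not specified on this page, so the field names, coordinate convention, and values here are purely illustrative assumptions.

    # Illustrative only: the actual annotation format is not specified here.
    # This sketch assumes each UI element is described by a class label and a
    # pixel-coordinate bounding box (x_min, y_min, x_max, y_max).
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class UIElement:
        label: str   # one of the 21 predefined classes, e.g. "paragraph"
        x_min: int
        y_min: int
        x_max: int
        y_max: int

    @dataclass
    class DrawnUIImage:
        filename: str
        elements: List[UIElement]

    # One annotated image with hypothetical values
    example = DrawnUIImage(
        filename="sample.jpg",
        elements=[
            UIElement("header", 40, 20, 980, 120),
            UIElement("paragraph", 60, 160, 700, 320),
            UIElement("label", 60, 360, 220, 400),
        ],
    )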

Evaluation methodology

The performance of the algorithms will be evaluated using the standard mean Average Precision (mAP) at an IoU threshold of 0.5, commonly used in object detection.
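For reference, the Intersection over Union (IoU) criterion underlying this metric is sketched below. This is not the official evaluation code; it is a minimal illustration assuming boxes given as (x_min, y_min, x_max, y_max) in pixel coordinates.

    # Minimal IoU sketch for the mAP@0.5 criterion (illustrative, not official).
    def iou(box_a, box_b):
        # Intersection rectangle
        x1 = max(box_a[0], box_b[0])
        y1 = max(box_a[1], box_b[1])
        x2 = min(box_a[2], box_b[2])
        y2 = min(box_a[3], box_b[3])

        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    # A predicted box counts as a match only if its IoU with a ground-truth
    # box of the same class is at least 0.5.
    print(iou((0, 0, 100, 100), (50, 0, 150, 100)))  # ~0.333
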

Participant registration

... coming soon ...

Submission instructions

Will be available closer to the competition end.

CEUR Working Notes

Will be available closer to the competition end.

Citations

Will be available closer to the competition end.

Organizers

  • Paul Brie <paul.brie(at)teleporthq.io>, teleportHQ, Cluj Napoca, Romania
  • Dimitri Fichou <dimitri.fichou(at)teleporthq.io>, teleportHQ, Cluj Napoca, Romania
  • Mihai Dogariu, University Politehnica of Bucharest, Romania
  • Liviu Daniel Ștefan, University Politehnica of Bucharest, Romania
  • Mihai Gabriel Constantin, University Politehnica of Bucharest, Romania
  • Bogdan Ionescu, University Politehnica of Bucharest, Romania
Attachment: sample.jpg (310.35 KB)