Experiments are performed as follows: participants are given topics, from which they create a query that is used to perform retrieval on the image collection. This process iterates (e.g., possibly involving relevance feedback) until they are satisfied with their runs. Participants might try different methods to increase the number of relevant images in the top N rank positions (e.g., query expansion).
Participants are free to experiment with whatever methods they wish for image retrieval, e.g., query expansion based on thesaurus lookup or relevance feedback, indexing and retrieval on only part of the image caption, different retrieval models, and combining text-based and content-based methods. Given the many possible approaches to ad-hoc retrieval, rather than list them all we ask participants to indicate which of the following applies to each of their runs (we consider these the "main" dimensions which define the query for this ad-hoc task):
| Dimension | Available Codes |
|---|---|
| Annotation language | EN, DE, FR, EN+DE, EN+FR, FR+DE, EN+FR+DE |
| Comment | YES, NO |
| Topic language | EN, DE, FR, EN+DE, EN+FR, FR+DE, EN+FR+DE |
| Run type | AUTO, MAN |
| Feedback/expansion | FB, QE, FBQE, NOFB |
| Retrieval type (Modality) | IMG, TXT, TXTIMG |
| Topic field | TITLE, IMG_Q, TITLEIMG_Q |
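As a concrete illustration, the dimension codes above can be collected into a small checker for run metadata. The codes themselves come from the table; the dictionary keys and the `validate_run` helper are our own hypothetical format, not a prescribed submission schema.

```python
# Hypothetical run-descriptor checker. The dimension codes are taken
# from the table above; the descriptor format itself is illustrative.
DIMENSIONS = {
    "annotation_language": {"EN", "DE", "FR", "EN+DE", "EN+FR", "FR+DE", "EN+FR+DE"},
    "comment": {"YES", "NO"},
    "topic_language": {"EN", "DE", "FR", "EN+DE", "EN+FR", "FR+DE", "EN+FR+DE"},
    "run_type": {"AUTO", "MAN"},
    "feedback_expansion": {"FB", "QE", "FBQE", "NOFB"},
    "retrieval_type": {"IMG", "TXT", "TXTIMG"},
    "topic_field": {"TITLE", "IMG_Q", "TITLEIMG_Q"},
}

def validate_run(run: dict) -> list:
    """Return a list of error messages; an empty list means every
    dimension carries one of the allowed codes."""
    errors = []
    for dim, codes in DIMENSIONS.items():
        value = run.get(dim)
        if value not in codes:
            errors.append(f"{dim}: {value!r} not in {sorted(codes)}")
    return errors

example = {
    "annotation_language": "EN",
    "comment": "NO",
    "topic_language": "EN+DE",
    "run_type": "AUTO",
    "feedback_expansion": "NOFB",
    "retrieval_type": "TXT",
    "topic_field": "TITLE",
}
print(validate_run(example))  # []
```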
Annotation language:
Used to specify the target language (i.e., the annotation set) used for the run: English (EN), German (DE), French (FR), and their combinations.
Comment:
Used to specify whether the <comment> field in the text annotation has been used: YES or NO.
Topic language:
Used to specify the query language used in the run: English (EN), German (DE), French (FR), and their combinations.
Run type:
We distinguish between manual (MAN) and automatic (AUTO) submissions. Automatic runs involve no user interaction, whereas manual runs are those in which a human has been involved in query construction and the iterative retrieval process, e.g., manual relevance feedback is performed. A good description of the differences between these run types is provided by TRECVID.
Feedback or Query Expansion:
Used to specify whether the run involves query expansion (QE), feedback (FB) techniques, both of them (FBQE), or neither (NOFB).
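To make the feedback/expansion distinction concrete, here is a minimal sketch of pseudo-relevance feedback used for query expansion: the most frequent terms from the top-ranked documents are appended to the original query. The function name, tokenization, and term-selection rule are all simplifying assumptions for illustration, not a method prescribed by the task.

```python
from collections import Counter

def expand_query(query_terms, top_docs, n_terms=5, stopwords=frozenset()):
    """Naive pseudo-relevance feedback: add the n_terms most frequent
    terms from the top-ranked documents to the original query."""
    counts = Counter()
    for doc in top_docs:
        for term in doc.lower().split():
            if term not in stopwords and term not in query_terms:
                counts[term] += 1
    expansion = [term for term, _ in counts.most_common(n_terms)]
    return list(query_terms) + expansion

docs = ["sunset over the alps", "alps covered in snow at sunset"]
print(expand_query(["sunset"], docs, n_terms=2,
                   stopwords={"the", "in", "at", "over"}))
```

A real system would weight expansion terms (e.g., Rocchio) rather than simply appending them, but the control flow is the same: retrieve, inspect the top N, reformulate, retrieve again.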
Retrieval type (Modality):
This describes the use of visual (image) or text features in your submission. A text-only run will have modality textual (TXT) and a purely visual run will have modality visual (IMG). Combined submissions (e.g., an initial text search followed by a possibly combined visual search) will have as modality: text+visual (TXTIMG), also referred to as "mixed".
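A common way to produce a mixed (TXTIMG) run is a weighted linear combination of normalized scores from a text run and a visual run (CombSUM-style fusion). The sketch below assumes min-max normalization and a hypothetical weight `alpha`; it illustrates one fusion strategy, not a method the task mandates.

```python
def minmax_normalize(scores):
    """Map raw scores to [0, 1]; a constant score list maps to all zeros."""
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {doc: 0.0 for doc in scores}
    return {doc: (s - lo) / (hi - lo) for doc, s in scores.items()}

def fuse(text_scores, image_scores, alpha=0.7):
    """Weighted CombSUM over min-max normalized text and visual scores.
    Documents missing from one run contribute 0 for that modality."""
    t = minmax_normalize(text_scores)
    v = minmax_normalize(image_scores)
    docs = set(t) | set(v)
    combined = lambda d: alpha * t.get(d, 0.0) + (1 - alpha) * v.get(d, 0.0)
    return sorted(docs, key=lambda d: -combined(d))

text = {"img1": 12.0, "img2": 8.0, "img3": 3.0}    # e.g., BM25 scores
visual = {"img2": 0.9, "img3": 0.8, "img4": 0.4}   # e.g., visual similarity
print(fuse(text, visual, alpha=0.6))
```

Normalizing before fusing matters because text and visual scores typically live on very different scales; `alpha` then controls how much the text run dominates the merged ranking.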
Topic field:
This specifies the topic fields employed in the run: only the title field of the topic (TITLE); only the example images in the topic (IMG_Q); both the title and image fields (TITLEIMG_Q).