
Personal Photo Retrieval 2013 - Submission How-To

Handing in your Submissions

Participants are free to rely only on the provided QBE documents or to additionally use data derived from the browsed documents. Please state on submission whether you have used the browsing data. Participants are free to experiment with whatever retrieval methods they wish, e.g., relevance feedback or the integration of other modalities such as spatial or temporal information. We ask participants to indicate which of the following applies to each of their runs, following the notation of the 2011 Wikipedia runs. Please note that you must not use the IPTC metadata fields that are present in the image documents.

You are expected to submit the top 100 of your results for each topic. In order to assess the retrieval quality of different approaches, you can submit up to 5 different runs.

Run type (MAN/AUTO):
We distinguish between manual (MAN) and automatic (AUTO) submissions. Automatic runs involve no user interaction, whereas manual runs are those in which a human has been involved in query construction and the iterative retrieval process, e.g., manual relevance feedback is performed. A good description of the differences between these types of runs is provided by TRECVID here.
Relevance feedback (BINARY/GRADED/NOFB):
When relevance feedback is used, please state whether your relevance feedback mechanism uses positive or negative examples only (BINARY) or whether it allows graded relevance as input (GRADED). If no feedback was used, indicate these runs by using NOFB.
Retrieval type (Modality):
This describes the use of visual (image) or other features in your submission. A purely visual run will have the modality "IMG". Runs using metadata will have the modality "MET". If you use browsing data, add "BRO". Combined submissions (e.g., a purely visual search using the browsing data) will have the modality visual+browsing (IMGBRO), also referred to as "mixed". Please note that you must not use the IPTC metadata fields (see below). Thus, the following combinations are possible:

  • IMG (only visuals)
  • IMGMET (visuals + metadata)
  • IMGMETBRO (all modalities)
  • IMGBRO (visuals and browsing data)
  • MET (only metadata)
  • METBRO (only metadata with browsing data)
  • BRO (only browsing data)
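As the list shows, the modality keyword is simply the concatenation of the features a run uses, in the order IMG, MET, BRO. A minimal sketch for deriving it (not part of the official instructions; the function name is illustrative only):

def modality_keyword(visual, metadata, browsing):
    # Concatenate the keywords of the modalities a run uses (order: IMG, MET, BRO).
    keyword = ("IMG" if visual else "") + ("MET" if metadata else "") + ("BRO" if browsing else "")
    if not keyword:
        raise ValueError("a run must use at least one modality")
    return keyword

# Example: a purely visual search that also uses browsing data -> "IMGBRO"
print(modality_keyword(visual=True, metadata=False, browsing=True))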

A list that maps document paths to file IDs used for submission can be found here.
For the submission of your results, please use the official ImageCLEF submission system. To log in, you will have to use the username and password that you received with your registration. When submitting a run (in total you can submit 5 runs), you will need to fill in the following form:

  1. Select the personal photo retrieval track called "ImageCLEFphoto-retrieval".
  2. In the "method description" field, you can outline the method you have used for the current run.
  3. Please choose "not applicable" for "retrieval type" because you will provide this information in the "other information" field.
  4. "Language" is also "not applicable".
  5. Choose the "run type" as described above.
  6. If this is your best run, indicate this by clicking the "primary run" checkbox.
  7. In the "other information" section, you must indicate how the run has been created. Please follow the following scheme.
    1. The first line has to be BINARY, GRADED, or NOFB, depending on the relevance feedback approach you have used.
    2. The second line should indicate the modalities you have used (see above), i.e., it should contain one of the following keywords: IMG, IMGMET, IMGMETBRO, IMGBRO, MET, METBRO, or BRO.
  8. If you have used additional resources (a web service from Bing or Google etc.), please state this.
  9. To finish your submission upload a runfile with the following format.
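
For example, a hypothetical run that uses no relevance feedback and combines visual features with browsing data would have the following two lines in the "other information" field:

NOFB
IMGBRO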

Submission File Format

The participants are requested to hand in their results in trec_eval format. After the publication of the results, the ground truths will be made available to the participants in trec_eval's QREL format for later usage. The submissions will be received through the ImageCLEF 2013 system using subtask "ImageCLEFphoto-retrieval".
The trec_eval format is as follows:

42 Q0 542 1 0.427824 YourRunID
42 Q0 560 2 0.379514 YourRunID
42 Q0 506 3 0.373361 YourRunID
42 Q0 555 4 0.367622 YourRunID
42 Q0 538 5 0.356699 YourRunID
42 Q0 527 6 0.337737 YourRunID
42 Q0 577 7 0.337599 YourRunID
42 Q0 524 8 0.33359 YourRunID
42 Q0 587 9 0.325482 YourRunID
42 Q0 174 10 0.320099 YourRunID
...

All fields are separated by a tab (\t) and every line has the following semantics (from left to right):

  1. The topic ID, as specified by the topics that have been released in March.
  2. This field will be ignored; we recommend leaving it as shown above.
  3. The file ID of the retrieved document, taken from the interval [1, 5555]. You can find the mapping of the file name to its ID in this file.
  4. The rank of the retrieved document. Please make sure that the documents are ordered by ascending rank.
  5. The calculated similarity score of the retrieved document.
  6. An arbitrary identifier that will help you recognize the retrieval technique used.

The participants will be permitted to submit up to 5 runs. Please submit your top 100 results for each topic.
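
As a rough sketch of how such a run file can be produced (the results dictionary, file name, and run ID below are made-up placeholders, not prescribed by the task):

# results maps a topic ID to a list of (file ID, similarity score) pairs.
results = {
    42: [(542, 0.427824), (560, 0.379514), (506, 0.373361)],
    43: [(17, 0.912345), (4711, 0.887766)],
}
run_id = "YourRunID"

with open("myrun.txt", "w") as f:
    for topic_id in sorted(results):  # topic IDs in ascending order
        ranked = sorted(results[topic_id], key=lambda pair: pair[1], reverse=True)
        for rank, (file_id, score) in enumerate(ranked[:100], start=1):  # at most the top 100
            # Tab-separated fields: topic ID, Q0, file ID, rank, score, run ID.
            f.write(f"{topic_id}\tQ0\t{file_id}\t{rank}\t{score:.6f}\t{run_id}\n")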

Important notes

  • The topic IDs within the run files have to be in ascending order, as do the ranks within each topic.
  • Each file ID is only allowed once per topic.
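
A small sanity check along these lines might look as follows (a sketch only; "myrun.txt" is a placeholder for your run file):

def check_runfile(path="myrun.txt"):
    last_topic = None
    seen_ids = {}    # topic ID -> file IDs already used for that topic
    last_rank = {}   # topic ID -> last rank seen for that topic
    with open(path) as f:
        for line in f:
            topic, _, file_id, rank, score, run_id = line.rstrip("\n").split("\t")
            topic, file_id, rank = int(topic), int(file_id), int(rank)
            assert 1 <= file_id <= 5555, "file ID outside [1, 5555]"
            assert last_topic is None or topic >= last_topic, "topic IDs not in ascending order"
            assert rank > last_rank.get(topic, 0), "ranks not ascending within a topic"
            assert rank <= 100, "more than 100 results for a topic"
            assert file_id not in seen_ids.setdefault(topic, set()), "file ID repeated within a topic"
            seen_ids[topic].add(file_id)
            last_rank[topic] = rank
            last_topic = topic

check_runfile()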
Attachment: File name to ID mappings (plain text, 109.1 KB)