ImageCLEF 2009 wikipediaMM task

Introduction
ImageCLEF's wikipediaMM task provides a testbed for the system-oriented evaluation of visual information retrieval from a collection of Wikipedia images. The aim is to investigate retrieval approaches in the context of a large and heterogeneous collection of images (similar to those encountered on the Web) that are searched for by users with diverse information needs.

In 2009, ImageCLEF wikipediaMM will use the same collection of Wikipedia images that was also used in 2008. This will be the last year this image collection is employed. It contains approximately 150,000 images that cover diverse topics of interest. These images are associated with unstructured and noisy textual annotations in English.

This is an ad-hoc image retrieval task; the evaluation scenario is therefore similar to the classic TREC ad-hoc retrieval task and the ImageCLEF photo retrieval task: it simulates the situation in which a system knows the set of documents to be searched, but cannot anticipate the particular topic that will be investigated (i.e. topics are not known to the system in advance). The goal of the simulation is: given a textual query (and/or sample images) describing a user's (multimedia) information need, find as many relevant images as possible from the Wikipedia image collection.

Any method can be used to retrieve relevant documents. We encourage the use of both concept-based and content-based retrieval methods and, in particular, multimodal approaches that investigate the combination of evidence from different modalities. To this end, we will provide a range of resources to support participants with expertise in different research domains.

*** IMPORTANT NOTE ***

The wikipediaMM task encourages participants to create the topics and perform the relevance assessments themselves. This is similar to the user model followed in INEX, with the difference that we do not require participants to get involved in that process. It is an optional step that allows the participants to share in the creation of the test collection.

Therefore, we encourage each group taking part in ImageCLEF's wikipediaMM task to:

  • create topics
  • perform the relevance assessments on the created topics

Our experience indicates that the creation of topics does not require much effort, whereas the assessments usually take less than one working day per topic. This procedure is also reflected in the schedule of the task (see below).

Data: Images & Metadata


The (INEX MM) wikipedia image collection consists of approximately 150,000 Wikipedia images (in JPEG and PNG formats) provided by Wikipedia users. Each image is associated with user-generated alphanumeric, unstructured metadata in English. These metadata usually contain a brief caption or description of the image, the Wikipedia user who uploaded the image, and the copyright information. These descriptions are highly heterogeneous and of varying length. The figure below provides an example image and its associated metadata.

[Figure: example image "anne frank house" and its associated metadata]

Further information about the image collection can be found in:

T. Westerveld and R. van Zwol. The INEX 2006 Multimedia Track. In N. Fuhr, M. Lalmas, and A. Trotman, editors, Advances in XML Information Retrieval: Fifth International Workshop of the Initiative for the Evaluation of XML Retrieval, INEX 2006, Lecture Notes in Computer Science/Lecture Notes in Artificial Intelligence (LNCS/LNAI). Springer-Verlag, 2007.

DOWNLOAD (participants only)
  • The wikipediaMM image collection (151,519 .jpeg and .png images - 14GB) can be downloaded in small batches: HERE
  • The metadata of the images in wikipediaMM image collection can be downloaded: HERE
  • Additional information on the wikipediaMM collection can be downloaded HERE. This contains:
    • A README-wikipediaMM file describing the provided data.
    • An imagesIDs.txt file listing all image identifiers.
    • An imagefile2metadatafile.txt file listing the correspondence between image and metadata files.
Resources
To aid you in your retrieval experiments (and in the topic development process, should you choose to participate), we provide additional resources on the wikipediaMM data, ranging from a baseline text retrieval system to precomputed visual similarities, concept classification scores, and image features.
  • A text-based retrieval system powered by PF/Tijah is provided here *** NO LONGER AVAILABLE ***
    • Place the mouse over a retrieved image (thumbnail) to display its metadata. Click a retrieved image (thumbnail) to view the full-size image.
    • The retrieval model used is a unigram language model (multinomial distribution model), in which the maximum likelihood estimate is smoothed by interpolating it with a background language model estimated from the entire collection. The smoothing parameter is set to 0.8 and stemming has been applied (a minimal scoring sketch is given after this list). More details on the applied language modelling approach can be found in:

      Djoerd Hiemstra. Using Language Models for Information Retrieval. 2001. (CTIT Ph.D. thesis series, ISSN 1381-3617; no. 01-32)
    • The top 1000 results are also provided in a simple format: rank imageID score.

  • The similarity matrix for the images in the collection has been constructed by the IMEDIA group at INRIA. It can be downloaded from here. *** NO LONGER AVAILABLE ***


    It contains:
    • For each image in the collection, the list of the top K=1000 most similar images in the collection, together with their similarity scores. Note that the ids in this matrix are different from the ids of the images in the collection.
    • For each image in the topics, the list of the top K=1000 most similar images in the collection, together with their similarity scores.
    • The similarity scores are actually based on the distance between images; therefore, the lower the score, the more similar the images (see the re-ranking sketch after this list).
    • Details on the features and distance metric used can be found in:

      Marin Ferecatu. Image retrieval with active relevance feedback using both visual and keyword-based descriptors. Ph.D. Thesis, Université de Versailles, France.

  • Image classification scores:
      • For each image, the classification scores for the 101 different MediaMill concepts are provided by the University of Amsterdam (UvA). The UvA classifier is trained on manually annotated TRECVID video data, and the concepts are selected for the broadcast news domain. More details can be found in:

        C. G. M. Snoek, M. Worring, J. C. van Gemert, J.-M. Geusebroek, and A. W. M. Smeulders. The challenge problem for automated detection of 101 semantic concepts in multimedia. In MULTIMEDIA ’06: Proceedings of the 14th annual ACM international conference on Multimedia, pages 421–430, New York, NY, USA, 2006. ACM Press.

        • The list of the 101 MediaMill concepts can be found here.
        • The classification scores of (most of) the wikipediaMM images for the MediaMill concepts can be found here.
      • Image features: For each image, the set of 120D feature vectors used to derive the above image classification scores is available. More details can be found in:
        J. C. v. Gemert, J.-M. Geusebroek, C. J. Veenman, C. G. M. Snoek, and A. W. M. Smeulders. Robust scene categorization by learning image statistics in context. In CVPRW ’06: Proceedings of the 2006 Conference on Computer Vision and Pattern Recognition Workshop, page 105, Washington, DC, USA, 2006. IEEE Computer Society.
        • The feature vectors of (most of) the wikipediaMM images can be found here.
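
To make the baseline's retrieval model concrete, the following is a minimal sketch of unigram language-model scoring with linear interpolation smoothing, as described for the text-based baseline above. The tokenisation, the in-memory document representation, and the interpretation of 0.8 as the weight on the document model are illustrative assumptions, not the actual PF/Tijah implementation.

    import math
    from collections import Counter

    LAMBDA = 0.8  # smoothing parameter (assumed here to weight the document model)

    def lm_score(query_terms, doc_tokens, collection_tf, collection_len):
        """Log-probability of the query under a smoothed unigram document model:
        P(t|d) = LAMBDA * tf(t,d)/|d| + (1 - LAMBDA) * cf(t)/|C|."""
        doc_tf = Counter(doc_tokens)
        doc_len = len(doc_tokens) or 1
        log_p = 0.0
        for t in query_terms:
            p_doc = doc_tf[t] / doc_len
            p_bg = collection_tf.get(t, 0) / collection_len
            p = LAMBDA * p_doc + (1 - LAMBDA) * p_bg
            log_p += math.log(p) if p > 0 else math.log(1e-12)  # floor for unseen terms
        return log_p

Ranking the collection for a topic then amounts to scoring each image's metadata text with this function and sorting by descending log-probability.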
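
Similarly, the sketch below turns the distance-based similarity matrix into a ranked list for a topic image. The file layout (one "topic_image_id neighbour_id distance" triple per line) and the id-mapping dictionary are hypothetical stand-ins for whatever format the actual download uses; the point is only that entries must be sorted in ascending order of distance and that matrix ids must be mapped back to collection ids.

    from collections import defaultdict

    def load_neighbours(path):
        """Read assumed lines of the form: <topic_image_id> <neighbour_id> <distance>."""
        neighbours = defaultdict(list)
        with open(path) as f:
            for line in f:
                qid, nid, dist = line.split()
                neighbours[qid].append((nid, float(dist)))
        return neighbours

    def rank_for_topic(topic_image_id, neighbours, matrix_to_collection_id):
        """Lower distance = more similar: sort ascending, then map ids back to collection ids."""
        entries = sorted(neighbours[topic_image_id], key=lambda x: x[1])
        return [(matrix_to_collection_id[nid], dist) for nid, dist in entries[:1000]]
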
Topics

The topics for the 2009 ImageCLEF wikipediaMM task will include (i) topics based on the analysis of an image search engine, and (ii) topics created by this year's task participants.

As an innovation this year, we do not require participants to get involved in the topic development process. It is an optional step that allows the participants to share in the creation of the test collection. If you wish to participate in the topic creation, please send us your proposals by 31.3.2009.

DOWNLOAD (participants only)

  • The guidelines for the topic development by the participants are available here.
  • The Candidate Topic Submission Form is available here.
  • The 2009 topics:
    • The topics (title + example images) can be found here.
    • The example images in the topics that are not part of the collection can be downloaded (i) with their original filenames from here or (ii) with their filenames changed to the topic number from here.

    The topics are multimedia queries that can consist of a textual and a visual part, with the latter being optional. Concepts that might be needed to constrain the results should be added to the title field. An example topic in the appropriate format is the following:


    <topic>
      <number> 1 </number>
      <title> cities by night </title>
      <image> http://www.bushland.de/hksky2.jpg </image>
      <narrative> I am decorating my flat and as I like photos of cities at night, I would like to find some that I could possibly print into posters. I would like to find photos of skylines or photos that contain parts of a city at night (including streets and buildings). Photos of cities (or the earth) from space are not relevant. </narrative>
    </topic>



    Therefore, the topics include the following fields:

    • title: query by keywords
    • image: query by one or more example images (optional)
    • narrative: description of the information need, in which the definitive definition of relevance and irrelevance is given
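
    For reference, the following is a minimal sketch of reading these fields with Python's standard library. It assumes the topics are distributed as well-formed XML in the structure shown above; the actual release may wrap the <topic> elements in a root element or differ slightly.

      import xml.etree.ElementTree as ET

      def parse_topics(path):
          """Extract number, title, example images and narrative from <topic> elements."""
          root = ET.parse(path).getroot()
          topics = []
          for t in root.iter("topic"):
              topics.append({
                  "number": (t.findtext("number") or "").strip(),
                  "title": (t.findtext("title") or "").strip(),
                  "images": [img.text.strip() for img in t.findall("image") if img.text],
                  "narrative": (t.findtext("narrative") or "").strip(),
              })
          return topics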



    DOWNLOAD (participants only)

    • We provide the topics and relevance assessments from the wikipediaMM task at ImageCLEF 2008.
      • The topics can be found here. Note that the topics in 2008 contained an additional field: the concept field that specified visual concepts.
      • The example images in the topics that are not part of the collection can be downloaded from here.
      • The relevance assessments in TREC format can be found here.
Evaluation Objectives


The characteristics of the (INEX MM) wikipedia image collection allow for the investigation of the following objectives:

  • how well do the retrieval approaches cope with larger scale image collections?
  • how well do the retrieval approaches cope with noisy and unstructured textual annotations?
  • how well do the content-based retrieval approaches cope with images that cover diverse topics and are of varying quality?
  • how well can systems exploit and combine different modalities given a user's multimedia information need? Can they outperform monomodal approaches such as query-by-text, query-by-concept, or query-by-image?

In the context of INEX MM 2006-2007, mainly text-based retrieval approaches were examined. Here, we hope to attract more visually-oriented approaches and, most importantly, multimodal approaches that investigate the combination of evidence from different modalities. The results of the wikipediaMM task at ImageCLEF 2008 showed that multimodal retrieval approaches outperformed text-based approaches for certain topics, but that overall text-based retrieval remained unbeaten. Multimodal retrieval therefore remains the focus of attention for 2009.

Retrieval Experiments

Experiments are performed as follows: participants are given the topics and use them to create queries, which are then used to perform retrieval on the image collection. This process may iterate (e.g. involving relevance feedback) until the participants are satisfied with their runs. Participants might try different methods to increase the number of relevant images in the top N rank positions (e.g., query expansion), such as the pseudo-relevance feedback sketch below.
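
As an illustration of one such technique, the sketch below expands a query with frequent terms from the metadata of the top-ranked images (blind/pseudo-relevance feedback). The parameter values and the assumption that a first-pass ranking and the metadata texts are available in memory are illustrative choices, not anything prescribed by the task.

    from collections import Counter

    def expand_query(query_terms, first_pass_ranking, metadata_texts,
                     top_docs=10, num_terms=5):
        """Add the most frequent non-query terms from the top-ranked images' metadata."""
        counts = Counter()
        for image_id in first_pass_ranking[:top_docs]:
            counts.update(metadata_texts[image_id].lower().split())
        expansion = [t for t, _ in counts.most_common() if t not in query_terms][:num_terms]
        return list(query_terms) + expansion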

Participants are free to experiment with whatever methods they wish for image retrieval, e.g., query expansion based on thesaurus lookup or relevance feedback, indexing and retrieval on only part of the image caption, different models of retrieval, and combining text- and content-based methods for retrieval. Given the many different approaches that could be used to perform the ad-hoc retrieval, rather than list them all, we ask participants to indicate which of the following applies to each of their runs (we consider these the "main" dimensions that define the query for this ad-hoc task):

Dimension            Available Codes
Topic language       EN
Annotation language  EN
Query/run type       AUTO, MAN
Feedback/expansion   FB, QE, FBQE, NOFB
Modality             IMG, TXT, CON, TXTIMG, TXTCON, IMGCON, TXTIMGCON
Topic field          TITLE, IMG_Q, TITLEIMG_Q

Query language:
Used to specify the query language used in the run. Only English queries will be provided this year, so the language code indicating the query language should be English (EN).

Annotation language:
Used to specify the target language (i.e., the annotation set) used for the run. Only English annotation will be provided this year, so the language code indicating the target language should be English (EN).

Query/run type:
We distinguish between manual (MAN) and automatic (AUTO) submissions. Automatic runs involve no user interaction, whereas manual runs are those in which a human has been involved in query construction and the iterative retrieval process, e.g. manual relevance feedback is performed. A good description of the differences between these types of runs is provided by TRECVID here.

Feedback or Query Expansion:
Used to specify whether the run involves query expansion (QE) or feedback (FB) techniques, both of them (FBQE), or none of them (NOFB).

Modality:
This describes the use of visual (image) features, text features, or concepts in your submission. A text-only run will have modality text (TXT), a concept-only run will have modality concept (CON), and a purely visual run will have modality image (IMG). Combined submissions (e.g., an initial text search followed by a possibly combined visual search) will have as modality any combination thereof: text+image (TXTIMG), text+concept (TXTCON), image+concept (IMGCON), or text+image+concept (TXTIMGCON). A simple score-fusion sketch for such combined runs is given below.
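
As one example of what a combined (TXTIMG) run might do, the sketch below linearly fuses a text score (higher is better) with a visual distance (lower is better, as in the similarity matrix above) after min-max normalisation. The weight alpha and the normalisation scheme are illustrative choices, not a prescribed fusion method.

    def min_max(scores):
        """Normalise a {doc_id: value} map to [0, 1]."""
        if not scores:
            return {}
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {d: (v - lo) / span for d, v in scores.items()}

    def fuse(text_scores, image_distances, alpha=0.7):
        """Combine text scores and (inverted) visual distances into a single ranking."""
        t = min_max(text_scores)
        v = min_max(image_distances)
        docs = set(t) | set(v)
        fused = {d: alpha * t.get(d, 0.0) + (1 - alpha) * (1.0 - v.get(d, 1.0)) for d in docs}
        return sorted(fused.items(), key=lambda x: x[1], reverse=True)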

Query field:
This specifies the topic fields employed in the run: only the title field of the topic (TITLE); only the example images in the topic (IMG_Q); both the title and image fields (TITLEIMG_Q).

Submissions

Participants can submit as many system runs as they require via the ImageCLEF wikipediaMM submission system.

Participants are required to submit ranked lists of (up to) the top 1000 images, ranked in descending order of similarity (i.e. the most similar image nearest the top of the list). The format of submissions for this ad-hoc task is the TREC format; it can be found here.

The filenames of the submitted runs should distinguish the different types of submission. The different types of possible submissions are described in the table above. It is extremely important that we get a detailed description of the techniques used for each submitted run. Participants can submit a run in any of the permutations detailed in that table, e.g., EN-EN-AUTO-NOFB-TXT-TITLE for an English-English monolingual run based on fully automatic text-based retrieval methods that uses the title topic field.

When the topic contains an image example that is part of the wikipediaMM collection, this image should not be part of the retrieval results, i.e., we are seeking relevant images that the users are not familiar with (as they are with the images they provided as examples).

Please note that there should be at least one document entry in your results for each topic (i.e. if your system returns no results for a query, insert a dummy entry, e.g. 25 1 16019 0 4238 xyzT10af5). This ensures that all systems are compared over the same number of topics and relevant documents. Submissions not following the required format will not be evaluated.
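
A minimal sketch of writing a run in this format, including the dummy entry for topics with no results, is shown below. The field order follows the example above (topic, iteration, image id, rank, score, run tag); the dummy id and score simply repeat the example values, and the run tag is the example name from the previous section.

    def write_run(results_per_topic, topic_ids, path, run_tag="EN-EN-AUTO-NOFB-TXT-TITLE"):
        """results_per_topic maps topic_id -> [(image_id, score), ...] sorted by descending score."""
        with open(path, "w") as out:
            for topic_id in topic_ids:
                results = results_per_topic.get(topic_id, [])
                if not results:
                    # dummy entry so that every topic appears in the run
                    out.write(f"{topic_id} 1 16019 0 4238 {run_tag}\n")
                    continue
                for rank, (image_id, score) in enumerate(results[:1000]):
                    out.write(f"{topic_id} 1 {image_id} {rank} {score:.4f} {run_tag}\n")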

Relevance Assessments
Assessors can perform their work starting from the wikipediaMM assessment page.

The page contains an explanation of the assessment system and contains links to the pools for the different groups. To access and assess the pools, you need your username and password (emailed by the wikipediaMM organisers).


DOWNLOAD (participants only)

  • The relevance assessments for the 45 topics can be found here. They are in TREC format.
Schedule
The schedule is as follows:

  • 1.2.2009: registration opens for all CLEF tracks
  • 9.3.2009: instructions and formatting criteria for candidate topics/queries provided to participants
  • 15.3.2009: data release (images + metadata)
  • 17.4.2009: submission of candidate topics (optional)
  • 27.4.2009: topic release
  • 1.5.2009: registration closes for all CLEF tracks
  • 15.6.2009: submission of runs
  • 30.6.2009: distribution of merged results to volunteers for relevance assessments
  • 24.7.2009: submission deadline for relevance assessments
  • 27.7.2009: release of results
  • 23.8.2009: submission of working notes papers
  • 30.9-2.10.2009: CLEF workshop in Corfu, Greece
Organisers