ImageCLEF's wikipediaMM task provides a testbed for the system-oriented evaluation of visual information retrieval from a collection of Wikipedia images. The aim is to investigate retrieval approaches in the context of a large and heterogeneous collection of images (similar to those encountered on the Web) that are searched for by users with diverse information needs.
In 2009, ImageCLEF wikipediaMM will use the same collection of Wikipedia images that was also used in 2008. This will be the last year this image collection is employed. It contains approximately 150,000 images that cover diverse topics of interest. These images are associated with unstructured and noisy textual annotations in English.
This is an ad-hoc image retrieval task; the evaluation scenario is thus similar to the classic TREC ad-hoc retrieval task and the ImageCLEF photo retrieval task: a simulation of the situation in which a system knows the set of documents to be searched, but cannot anticipate the particular topic that will be investigated (i.e. topics are not known to the system in advance). The goal of the simulation is: given a textual query (and/or sample images) describing a user's (multimedia) information need, find as many relevant images as possible from the Wikipedia image collection.
Any method can be used to retrieve relevant documents. We encourage the use of both concept-based and content-based retrieval methods and, in particular, multimodal approaches that investigate the combination of evidence from different modalities. To this end, we will provide a range of resources to support participants with expertise in different research domains.
*** IMPORTANT NOTE ***
The wikipediaMM task encourages participants to create the topics and perform the relevance assessments themselves. This is similar to the user model followed in INEX, with the difference that we do not require participants to get involved in that process. It is an optional step that allows the participants to share in the creation of the test collection.
Therefore, we encourage each group taking part in ImageCLEF's wikipediaMM task to contribute to both the creation of topics and the relevance assessments.
Our experience indicates that the creation of topics does not require much effort, whereas the assessments usually take less than one working day per topic. This procedure is also reflected in the schedule of the task (see below).
Data: Images & Metadata
The (INEX MM) wikipedia image collection consists of approximately 150,000 Wikipedia images (in JPEG and PNG formats) provided by Wikipedia users. Each image is associated with user-generated alphanumeric, unstructured metadata in English. These metadata usually contain a brief caption or description of the image, the Wikipedia user who uploaded the image, and the copyright information. These descriptions are highly heterogeneous and of varying length. The figure below provides an example image and its associated metadata.
T. Westerveld and R. van Zwol. The INEX 2006 Multimedia Track. In N. Fuhr, M. Lalmas, and A. Trotman, editors, Advances in XML Information Retrieval: Fifth International Workshop of the Initiative for the Evaluation of XML Retrieval, INEX 2006, Lecture Notes in Computer Science/Lecture Notes in Artificial Intelligence (LNCS/LNAI). Springer-Verlag, 2007.
To aid you in your retrieval experiments (and in the topic development process, should you choose to participate), we provide additional resources in the form of baseline retrieval systems on the wikipediaMM data.
The topics for the 2009 ImageCLEF wikipediaMM task will include (i) topics based on analysis from an image search engine, and (ii) topics created by this year's task participants.
The characteristics of the (INEX MM) wikipedia image collection allow several retrieval objectives to be investigated.
In the context of INEX MM 2006-2007, mainly text-based retrieval approaches were examined. Here, we hope to attract more visually-oriented approaches and, most importantly, multimodal approaches that investigate the combination of evidence from different modalities. The results of wikipediaMM at ImageCLEF 2008 showed that multimedia retrieval approaches outperformed text-based approaches on certain topics, but overall text-based retrieval remains unbeaten. The retrieval of multimedia documents therefore remains a focus for 2009.
Experiments are performed as follows: participants are given topics, from which they create queries that are run against the image collection. This process may iterate (e.g. involving relevance feedback) until they are satisfied with their runs. Participants might try different methods to increase the number of relevant images in the top N rank positions (e.g., query expansion).
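As an illustration, one common form of query expansion is pseudo-relevance feedback: treat the top-ranked documents as relevant and add their most frequent terms to the query. The sketch below is purely illustrative (the function name, tokenised documents, and parameters are not part of the task infrastructure):

```python
from collections import Counter

def expand_query(query_terms, top_docs, k=5):
    """Naive pseudo-relevance feedback: add the k most frequent
    terms from the top-ranked documents to the original query.
    `top_docs` is a list of token lists, one per retrieved document."""
    counts = Counter()
    for doc in top_docs:
        counts.update(t for t in doc if t not in query_terms)
    expansion = [term for term, _ in counts.most_common(k)]
    return list(query_terms) + expansion

# Hypothetical example: expand a two-term query using two top-ranked docs.
docs = [["eiffel", "tower", "paris", "night"],
        ["paris", "tower", "seine", "night"]]
expanded = expand_query(["eiffel", "tower"], docs, k=2)
# expanded == ["eiffel", "tower", "paris", "night"]
```

In a real run, the expanded query would be resubmitted to the retrieval system, and the cycle repeated until the ranking is satisfactory.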
Participants are free to experiment with whatever methods they wish for image retrieval, e.g., query expansion based on thesaurus lookup or relevance feedback, indexing and retrieval on only part of the image caption, different retrieval models, and combinations of text-based and content-based methods. Given the many possible approaches to this ad-hoc retrieval task, rather than list them all we ask participants to indicate which of the following applies to each of their runs (we consider these the "main" dimensions that define a query for this ad-hoc task):
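One simple way to combine text-based and content-based methods, mentioned above, is late fusion: normalise the scores of the two runs and take a weighted sum. This is a minimal sketch under assumed inputs (score dictionaries keyed by image id); it is not the prescribed combination method for the task:

```python
def fuse_scores(text_run, image_run, alpha=0.7):
    """Linear late fusion of a text run and an image run after
    min-max normalisation. Each run maps image id -> score;
    alpha is the (assumed) weight given to the text modality."""
    def normalise(run):
        lo, hi = min(run.values()), max(run.values())
        span = (hi - lo) or 1.0
        return {d: (s - lo) / span for d, s in run.items()}
    t, v = normalise(text_run), normalise(image_run)
    fused = {d: alpha * t.get(d, 0.0) + (1 - alpha) * v.get(d, 0.0)
             for d in set(t) | set(v)}
    # Return images sorted by fused score, best first.
    return sorted(fused.items(), key=lambda x: x[1], reverse=True)

# Hypothetical runs over three images.
ranked = fuse_scores({"img1": 12.0, "img2": 5.0},
                     {"img2": 0.9, "img3": 0.4})
```

The normalisation step matters because text and visual similarity scores typically live on very different scales.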
Feedback or Query Expansion:
Participants can submit as many system runs as they require via the ImageCLEF wikipediaMM submission system.
Participants are required to submit ranked lists of (up to) the top 1000 images, ranked in descending order of similarity (i.e. the most similar image at the top of the list). Submissions for this ad-hoc task must follow the TREC run format.
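For concreteness, a TREC run file has six whitespace-separated columns per line: topic id, iteration, document id, rank, score, and run tag. The helper below is an illustrative sketch (the function name and sample ids are invented), matching the dummy-entry example given further down:

```python
import os
import tempfile

def write_trec_run(results, run_tag, path):
    """Write ranked results in the six-column TREC run format:
    topic_id  iteration  doc_id  rank  score  run_tag.
    `results` maps topic id -> list of (doc_id, score), best first;
    at most 1000 entries per topic are written."""
    with open(path, "w") as f:
        for topic in sorted(results):
            for rank, (doc_id, score) in enumerate(results[topic][:1000]):
                f.write(f"{topic} 1 {doc_id} {rank} {score} {run_tag}\n")

# Hypothetical usage: two retrieved images for topic 25.
run_path = os.path.join(tempfile.mkdtemp(), "myrun.txt")
write_trec_run({25: [("16019", 4238), ("20774", 4100)]}, "xyzT10af5", run_path)
first_line = open(run_path).readline().strip()
# first_line == "25 1 16019 0 4238 xyzT10af5"
```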
The filenames of the submitted runs should distinguish the different types of submission, which are described in the table below. It is extremely important that we get a detailed description of the techniques used for each submitted run. Participants can submit a run in any of the permutations detailed in the table, e.g., EN-EN-AUTO-NOFB-TXT-TITLE for an English-English monolingual run based on fully automatic text-based retrieval that uses the title topic field.
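Building the run name programmatically helps avoid typos across many submissions. The six dimensions below are inferred from the example name EN-EN-AUTO-NOFB-TXT-TITLE (source language, target language, automatic/manual, feedback, modality, topic field); consult the submission table for the authoritative set of values:

```python
def run_filename(src="EN", tgt="EN", mode="AUTO", feedback="NOFB",
                 modality="TXT", topic_field="TITLE"):
    """Compose a run identifier such as EN-EN-AUTO-NOFB-TXT-TITLE.
    The dimension names and defaults here are assumptions inferred
    from the example in the task description, not an official spec."""
    return "-".join([src, tgt, mode, feedback, modality, topic_field])

name = run_filename()
# name == "EN-EN-AUTO-NOFB-TXT-TITLE"
```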
When the topic contains an image example that is part of the wikipediaMM collection, this image should not be part of the retrieval results, i.e., we are seeking relevant images that the users are not familiar with (as they are with the images they provided as examples).
Please note that there should be at least one document entry in your results for each topic (i.e. if your system returns no results for a query, insert a dummy entry, e.g. 25 1 16019 0 4238 xyzT10af5). This ensures that all systems are compared over the same number of topics and relevant documents. Submissions not following the required format will not be evaluated.
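A small pre-submission check can enforce this rule automatically. The sketch below is illustrative (the function name is invented, and the dummy document id simply mirrors the example entry above; any id works):

```python
def pad_empty_topics(results, all_topics, dummy=("16019", 0)):
    """Insert one dummy (doc_id, score) entry for any topic with no
    retrieved documents, so every topic appears in the submitted run.
    `results` maps topic id -> list of (doc_id, score)."""
    for topic in all_topics:
        if not results.get(topic):
            results[topic] = [dummy]
    return results

# Topic 26 returned nothing and topic 27 is missing entirely;
# both receive a dummy entry, topic 25 is left untouched.
padded = pad_empty_topics({25: [("4238", 17.0)], 26: []}, [25, 26, 27])
```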
Assessors can perform their work starting from the wikipediaMM assessment page.
The page explains the assessment system and contains links to the pools for the different groups. To access and assess the pools, you need your username and password (emailed by the wikipediaMM organisers).
The schedule can be found on the task website.