Frequently asked questions

How can I register for ImageCLEF?

ImageCLEF has its own registration interface for every year (here for 2016), where you can choose a user name and a password. This registration interface is used, for example, for access to the data and for the submission of runs. If you already have a login from a previous ImageCLEF evaluation cycle, you can migrate it to the current cycle here.

Due to database restrictions, it is necessary to sign a user agreement to get access to the data. Please print the document, sign it, and follow the instructions in the respective document on where to send it.

I registered, but where do I find the login data for the databases?

Once we have received your signed copyright form, we change the status of your account in the registration interface to valid. You can then access the user names and passwords for all collections you registered for. These can be found by selecting the "Collections" link on the left in the registration interface and choosing the "Details" of the task you are interested in.

Can I obtain the databases of ImageCLEF without registering?

In general, it is not possible to obtain the databases without registering, as the copyright owners often require us to restrict access in this way. Still, some datasets have already been made publicly available without the need to register. This depends not on us but on the copyright owners, although we of course prefer high participation at the workshop and many submissions. You can ask the task organisers for more details on their collections.

Datasets that can be obtained independently:

What are my obligations when I register?

The main obligations are described in the forms that you sign. Most of them concern the handling and usage terms for the datasets. The datasets are not your property, but you are allowed to use them in the ImageCLEF context and partly also for your research afterwards.

What happens if I am not able to submit results after registration?

There is no penalty for registering without submitting, so there is nothing to worry about. Still, you only get the most from your participation when you submit results and can compare your research with that of other groups.

Can I register for ImageCLEF at any time of the year?

Similar to TREC and other benchmarking events, CLEF has a yearly cycle of making registration, data, and tasks available and then evaluating the results. In principle, you can register between the moment registration opens in November and the submission deadline for results, some time in April. You are of course better off registering as early as possible, as this gives you more time to work on your submission.

Can I participate at the workshop without having submitted results?

Yes, everybody is free to participate in the CLEF workshop or, more particularly, the ImageCLEF sessions. People who were not registered for the benchmark and groups that did not submit anything can also register for the workshop.

How many different tasks are there at ImageCLEF?

The number of tasks may change every year. In general, it depends on the interest of research groups in the tasks proposed the preceding year, on the availability of datasets, and on someone being motivated to run a separate task (which is a lot of work!). Detailed information on the number of tasks and the tasks themselves is published in good time on http://www.imageclef.org.

Can I reuse the image databases for other research after the benchmark is over?

This depends mainly on the copyright constraints of the databases. It is often indeed possible to reuse the datasets after the competition, provided the sources are correctly cited (usually it is mandatory to cite the overview papers of the corresponding task and year). If this is not clear, please contact us and we can tell you the exact terms of the database that you are planning to use.

How is the selection of the oral presentations at the workshop done?

We select a small number of oral presentations at the workshop based on several criteria. We would like to give some of the best-performing submissions the opportunity to present their techniques, but we would also like to give techniques that seem interesting and innovative the chance to be presented. The organization committee takes the final decisions. In any case, all participants at the workshop have the possibility to present their work as a poster.

How important is it to win ImageCLEF?

As is often stated for other competitions (or better: coopetitions) such as TREC, the main goal is not to win but to compare techniques on the same data so that everyone can learn from the results. Everyone who has been to a workshop can confirm that the discussions at the posters are always very lively, covering many approaches that work and many that do not. It is a chance for everyone to learn from each other.

What is the difference between automatic, manual and feedback submissions?

An automatic submission means that your system does not take into account any information derived from the target database, and that no user interacts with the system to improve the results. If you modify, for example, feature weights based on learning from the collection, it is already a manual run. Manual query modifications for the retrieval also make a submission manual. If, based on the first results, a real user supplies feedback to the system to refine the queries, it should be labeled a feedback run.

How can I make my collection available via ImageCLEF?

Please contact the organizers if you have (preferably large) collections that could be made available for CLEF. We are always looking for collections whose copyright permits us to distribute them among the participants. Of course, the content of the collection needs to fit our strategy for it to be useful. Some databases might not be used immediately but a year or two later.

How are the articles of the ImageCLEF workshop published?

The publication modalities for ImageCLEF can change from one year to another. At the conference, working notes are made available to the participants as part of the well-indexed CEUR-WS series. The conference proceedings are usually published in the Springer Lecture Notes in Computer Science after a review process; only the overview papers and submissions to the conference appear in these.

How are the tasks for the various ImageCLEF tracks developed?

The tasks of ImageCLEF have been developed over a fairly long time, and the method for creating them often differs from one year to another. It is adapted to the suggestions of the participants so as to be really helpful for research. In general, we try to create a realistic user model and then create tasks for this user. Often, surveys among professionals (medical, library, etc.) are used, or log files of systems such as web demonstrations. Suggestions for creating realistic tasks are also always welcome.

Can I propose my own tasks for a future version of ImageCLEF?

Yes, you can of course propose new tasks and/or databases or other resources to us, particularly if you are willing to put work into this and help organize it (which really is a lot of work). We can never guarantee to accept your proposal, but the organization committee discusses possible new tasks every year. It is important not to run the same recipe every year but to evolve with the research interests of the field.

What are the main objectives of ImageCLEF?

The main objective of ImageCLEF is to advance the field of image retrieval and to offer evaluation in various areas of image information retrieval. We have particularly identified the combined use of textual, visual, and other features, that is, multi-modality, as the current state of the art.

Where are the differences between ImageCLEF and other initiatives such as TREC, TRECVID, ImageEval, MediaEval or the Benchathlon?

There are quite a few differences between all these evaluation campaigns, but also a few similarities. TREC is surely the role model for CLEF and ImageCLEF with respect to organization. Still, TREC concentrates on text, whereas ImageCLEF deals with still images plus annotations for these images; most annotations are also available in several languages.

The Benchathlon was more a forum for creating evaluation resources; no real evaluation ever took place, although many papers were published in this context and a database was created.

TRECVID uses videos rather than still images as the target for retrieval, but otherwise there are a few similarities, as single shots of the videos can be regarded as images for retrieval. The spoken language can also be transcribed into text and is available to participants, so multi-modal and, more recently, multi-lingual retrieval are covered. We work together with the TRECVID organizers to avoid too much overlap. We also have other user models, such as medical practitioners or home photo users, whereas TRECVID rather has a BBC journalist as its user model.

ImageEval has a variety of interesting tasks but is currently fairly concentrated on the French research community, although it is accessible to other researchers as well. Many of its tasks test the invariance or stability of systems with respect to image changes rather than real-world tasks based on identified needs of users of retrieval systems. Again, we try to stay in contact with the organizers to coordinate efforts.

MediaEval is a benchmarking initiative dedicated to evaluating new algorithms for multimedia access and retrieval. It emphasizes the 'multi' in multimedia and focuses on the human and social aspects of multimedia tasks. MediaEval started as VideoCLEF in 2008 and became independent in 2010. MediaEval is now coordinated with ImageCLEF, and in 2017 the two benchmarks' workshops will be held together in Dublin.

How is ImageCLEF financed?

Financing is always a difficult question when it comes to benchmarks and evaluation campaigns. Most of us work without any financial aid, mainly because ImageCLEF is very interesting to work on. CLEF and ImageCLEF have received small amounts of funding in the past, but currently no project funding is available for the organisation.