
Domain adaptation

Welcome to the website of the Domain Adaptation challenge!


Schedule




Task overview

The amount of freely available annotated image collections has increased dramatically over the last years, thanks to the diffusion of high-quality cameras and to the introduction of new, cheap annotation tools such as Mechanical Turk. Attempts to leverage over and across such large data sources have proved challenging. Indeed, tools like Google Goggles can reliably recognize a limited set of object classes, such as books or wine labels, but are not able to generalize across generic objects like food items, clothing items and so on. This problem is known in the literature as the domain adaptation challenge. Addressing this issue would have a tremendous impact on the generality and adaptability of any vision-based annotation system.

Current research in domain adaptation focuses on a scenario where (a) the prior domain (source) consists of one or at most two databases, (b) the labels of the source and the target domain are the same, and (c) the amount of annotated training data for the target domain is limited. The goal of this challenge is to push the state of the art towards more realistic settings by relaxing these assumptions. In the 2014 edition, we will focus on assumption (a), with the number of source databases ranging from 2 up to at least 5. Participants will be asked to build recognition systems for the target classes by leveraging over the source knowledge. Source data will be provided by exploiting existing available resources like the ImageNet, Caltech-256 and AwA databases. Target data will be collected from the Web, using existing tools like Google Images or Bing. Performance will be measured in terms of accuracy.




Registering for the task and accessing the data

To participate in this task, please register by following the instructions found in the main ImageCLEF 2014 webpage.




Organizers

  • Barbara Caputo, University of Rome La Sapienza, caputo@dis.uniroma1.it
  • Novi Patricia, Idiap Research Institute, novi.patricia@idiap.ch

Contact

For any doubt related to the task, please send email to: novi.patricia@idiap.ch




Results

Final results of the Domain Adaptation challenge
#  Group name                Total score
1  XRCE                      228
2  Hubert Curien lab group   158
3  IDIAP                      45

XRCE

Score per Class
Class #   Score
1 41  
2 12  
3 15  
4 18  
5 20  
6 23  
7 17  
8 8  
9 17  
10 28  
11 12  
12 17  

Hubert Curien lab group

Score per Class
Class #   Score
1 36  
2 7  
3 15  
4 5  
5 25  
6 10  
7 13  
8 8  
9 6  
10 15  
11 7  
12 11  

IDIAP

Score per Class
Class #   Score
1 3  
2 1  
3 0  
4 4  
5 3  
6 6  
7 7  
8 3  
9 2  
10 3  
11 3  
12 10  

Runs
#   Group name                Score total   Run name
1   XRCE                      228           1399306364121__combin6_Np20_div108
2   XRCE                      228           1398937484692__combin3_Np18_div108
3   XRCE                      226           1398938027662__combinAll6_Np19_div164
4   XRCE                      217           1399306281302__combin6A_Np19_div78
5   XRCE                      214           1399298355362__MLNCM_MLDA_128_it200_e0.1_Topk1_NCMC5_bigloop_eqw_MV_p025
6   XRCE                      212           1399305916296__combinAll7A_Np19_div134
7   XRCE                      208           1399306386958__combin8A_Random_Np25_div78
8   XRCE                      185           1399298318039__MLNCMC_ML_128_it200_e0.1_Topk1_NCMC1__hsTD_eqw_MV_p025
9   XRCE                      182           1398937340813__combin2_Np10_div134
10  XRCE                      158           1399298540326__svmBoost_Mul_Power_f60_acc383
11  Hubert Curien lab group   158           1399307327099__Our_Results
12  Hubert Curien lab group   142           1399282760110__data_label
13  Hubert Curien lab group   140           1399314094053__results
14  Hubert Curien lab group   140           1399313194459__results
15  Hubert Curien lab group   140           1399310150668__results
16  Hubert Curien lab group   138           1399281473861__data_label
17  Hubert Curien lab group   133           1399313519232__results
18  Hubert Curien lab group   132           1399216516740__data_label
19  Hubert Curien lab group    77           1399311976079__results
20  IDIAP                      45           1399899953697__result_hl2l_svm_c100



Download Test Data

Source Domain: (detail)
  1. Caltech
  2. ImageNet
  3. Pascal
  4. Bing
Target Domain: SUN
  1. Train
  2. Test
  3. Test Label



Submission instructions

Submissions will be received through the ImageCLEF 2014 system: go to "Runs", then "Submit run", and select the track "ImageCLEF2014:domain-adaptation".

Task-specific submission instructions and the run format can be seen here.




The task

In this year's edition, only one task is considered: participants must classify images in the target domain by leveraging labeled source data.

The data

Participants are provided with five datasets:

  1. Caltech-256: 256 object categories, with a collection of 30,607 images.
  2. ImageNet ILSVRC2012: organized according to the WordNet hierarchy, with an average of 500 images per node.
  3. PASCAL VOC2012: an image dataset for object class recognition with 20 object classes.
  4. Bing: contains all 256 categories from Caltech-256, augmented with 300 web images per category collected through textual search using Bing.
  5. SUN: a scene understanding database containing 899 categories and 130,519 images.
The organizers select 12 common classes from the data sets:
  1. aeroplane
  2. bike
  3. bird
  4. boat
  5. bottle
  6. bus
  7. car
  8. dog
  9. horse
  10. monitor
  11. motorbike
  12. people
The following figure illustrates example images for each class from the given datasets.


Source Domain

In this year's challenge, participants are provided with four source datasets:

  1. Caltech
  2. ImageNet
  3. Pascal
  4. Bing
The organizers randomly select 50 images per class, for a total of 600 images per source. Each source is distributed as a text file containing the 600 extracted image features and a corresponding label file.
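
The exact layout of these text files is not documented in this section; assuming one whitespace-separated feature vector per line and one numeric class label per line, a minimal Octave/Matlab loading sketch (with hypothetical file names) could look like this:

    % Hypothetical file names; the actual names inside the provided archives may differ.
    Xsrc = dlmread('caltech_features.txt');   % assumed layout: 600 x 1024, one feature vector per line
    ysrc = dlmread('caltech_labels.txt');     % assumed layout: 600 x 1, one class index (1..12) per line
    assert(size(Xsrc, 1) == numel(ysrc));     % sanity check: one label per feature vector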

Target Domain

The organizers use the SUN dataset as target data, with 5 images per class for the training set and 50 images per class for the test set.

  • Training
  • Test
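
Purely as an illustration of how this split can be used, and not as the evaluation protocol or any participant's method, the sketch below trains a naive nearest-class-mean classifier on pooled source and target training features; the variables are assumptions, loaded as in the source-domain sketch above:

    % Xtrain (N x 1024), ytrain (N x 1): pooled source + 5-per-class target training data (assumed loaded)
    % Xtest  (M x 1024): target test features (assumed loaded)
    C = 12;                                                           % number of classes
    centroids = zeros(C, size(Xtrain, 2));
    for c = 1:C
        centroids(c, :) = mean(Xtrain(ytrain == c, :), 1);            % per-class mean feature
    end
    D = zeros(size(Xtest, 1), C);
    for c = 1:C
        D(:, c) = sum(bsxfun(@minus, Xtest, centroids(c, :)).^2, 2);  % squared Euclidean distances
    end
    [drop, pred] = min(D, [], 2);                                     % pred(i) = predicted class index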

Image Feature Extractors

Each image is represented with dense SIFT descriptors (PHOW features) computed at points on a regular grid with a spacing of 128 pixels. At each grid point the descriptors are computed over four patches with different radii, so each point is represented by four SIFT descriptors. The dense features are vector quantized into 256 visual words using k-means clustering on a randomly chosen subset of the Caltech-256 database. Finally, all images are converted to 2x2 spatial histograms over the 256 visual words, resulting in 1024-dimensional feature vectors. The software used for feature extraction is available at www.vlfeat.org.
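
A rough sketch of this pipeline using the VLFeat Matlab toolbox is given below. The organizers' exact parameters and normalization are not fully specified here, so the patch sizes and the L1 normalization are assumptions, and 'example.jpg' and caltech_subset_descrs are placeholders:

    % Requires VLFeat on the Matlab/Octave path (run vl_setup first).
    im = im2single(rgb2gray(imread('example.jpg')));                   % placeholder image
    [frames, descrs] = vl_phow(im, 'Step', 128, 'Sizes', [4 6 8 10]);  % dense SIFT, 4 scales per grid point

    % The 256-word vocabulary is learned once, by k-means on a random subset of
    % Caltech-256 descriptors, e.g.:
    %   vocab = vl_kmeans(single(caltech_subset_descrs), 256);         % 128 x 256 visual words

    % Assign each descriptor to its nearest visual word.
    [drop, words] = min(vl_alldist2(single(descrs), vocab), [], 2);

    % Accumulate a 2x2 spatial histogram over the 256 visual words -> 1024-dim feature.
    [h, w] = size(im);
    bx = min(floor(frames(1, :) / (w / 2)) + 1, 2);                    % horizontal spatial bin (1 or 2)
    by = min(floor(frames(2, :) / (h / 2)) + 1, 2);                    % vertical spatial bin (1 or 2)
    feat = zeros(2, 2, 256);
    for i = 1:numel(words)
        feat(by(i), bx(i), words(i)) = feat(by(i), bx(i), words(i)) + 1;
    end
    feat = feat(:)' / sum(feat(:));                                    % L1 normalization (assumed)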

Reference:
  • Image Classification using Random Forests and Ferns, A. Bosch, A. Zisserman, X. Munoz, IEEE International Conference on Computer Vision, 2007.



Performance Evaluation

For each image, participants have to provide the class name. The result will be compared against the ground-truth label.

  • Each correctly classified image receives 1 point.
  • Each misclassified image receives 0 points.

Performance Evaluation Script

A Matlab script is provided for evaluating the performance of the algorithms on the test dataset. The script has been tested under Matlab (ver 8.1.0.64) and Octave (ver 3.6.2):

Both programs are available for Unix/Linux, Windows, and Mac OS X. Octave can be downloaded from http://www.gnu.org/software/octave/download.html. Participants are not required to have prior knowledge of Matlab or Octave to run the script. The script needs two files: label.txt and result.txt. 'label.txt' contains the ground-truth labels of the test data. Participants need to provide 'result.txt'; the filename can be changed in the script. The attachment contains 3 files:
  • domainadapt.m
  • label.txt
  • exampleresult.txt
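
Assuming that domainadapt.m is a plain script that reads label.txt and result.txt from the current working directory (the exact invocation is not stated in this section), it can presumably be run from the Matlab/Octave prompt as follows, where the path is a placeholder:

    >> cd path_to_unzipped_attachment   % folder containing domainadapt.m, label.txt and result.txt
    >> domainadapt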

When using the script, the following class names should be used (case-sensitive):
  • aeroplane
  • bike
  • bird
  • boat
  • bottle
  • bus
  • car
  • dog
  • horse
  • monitor
  • motorbike
  • people
Each line in the result file should represent the classification of a single image, in the following format: <number> <class_name>.
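For instance, if a hypothetical run predicted 'aeroplane' for test image 1 and 'dog' for test image 2, the first lines of result.txt would read:

    1 aeroplane
    2 dog

A minimal Octave/Matlab sketch for writing such a file from a vector pred of predicted class indices (the variable name is an assumption, matching the baseline sketch above; class indices are assumed to follow the order of the list above):

    classes = {'aeroplane', 'bike', 'bird', 'boat', 'bottle', 'bus', ...
               'car', 'dog', 'horse', 'monitor', 'motorbike', 'people'};
    fid = fopen('result.txt', 'w');
    for i = 1:numel(pred)
        fprintf(fid, '%d %s\n', i, classes{pred(i)});   % <number> <class_name>
    end
    fclose(fid);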
Provided that Matlab or Octave is installed, running the script will produce output of the following form:
/===============================================\
       Domain Adaptation @ImageCLEF 2014
       Performance Evaluation Script
\===============================================/
Counting the score
Score per class:
1. 4
2. 2
3. 0
4. 1
5. 2
6. 4
7. 10
8. 5
9. 6
10. 6
11. 3
12. 5

Total score: 48
Attachments:
  • bing0414.zip (1.05 MB)
  • caltech0414.zip (945.31 KB)
  • imagenet0414.zip (1.05 MB)
  • pascal04014.zip (1.08 MB)
  • sun_train0414.zip (85.96 KB)
  • sun_test0414.zip (864.99 KB)