ImageCLEFlifelog 2020

Welcome to the 4th edition of the Lifelog Task!

Motivation

An increasingly wide range of personal devices, such as smartphones, video cameras, and wearables, makes it possible to capture pictures, videos, and audio clips for every moment of our lives. Given the huge volume of data created, there is a need for systems that can automatically analyse the data in order to categorize and summarize it, and to answer queries so that users can retrieve the information they need.

ImageCLEF Lifelog schedule

  • 10.02.2020: Registration opens for ImageCLEF Lifelog tasks
  • 17.02.2020: Development data released
  • 30.04.2020: SPLL test data and train ground-truth are released
  • 11.05.2020: LMRT test data released
  • 05.06.2020: Deadline for submitting the participants' runs
  • 12.06.2020: Release of the processed results by the task organizers
  • 10.07.2020: Deadline for submission of working notes papers by the participants
  • 07.08.2020: Notification of acceptance of the working notes papers
  • 21.08.2020: Camera ready working notes papers
  • 22-25.09.2020: CLEF 2020, Thessaloniki, Greece

The schedule was last updated on 30.04.2020.

Task description

Dataset: New for 2020

The 4th edition of this task comes with new, enriched data focused on daily living activities and the chronological order of moments, plus a completely new task for assessing sport performance.

SubTask 1: lifelog moment retrieval (LMRT, 4th edition)

Lifelog Core Task: participants are required to retrieve a number of specific predefined moments from a lifelogger's life. For example, they are asked to return the relevant moments for the query "Find the moment(s) when the lifelogger was having an ice cream on the beach". Particular attention should be paid to the diversification of the selected moments with respect to the target scenario.

Data: a new, rich multimodal dataset will be used (about 4.5 months of data from three lifeloggers, 1,500-2,500 images per day, visual concepts, semantic content, biometric information, music listening history, and computer usage).

SubTask 2: sport performance lifelog (SPLL, 1st edition)

New Lifelog Task: participants are required to predict the expected performance (e.g., estimated finishing time, average heart rate, and calorie consumption) of an athlete who trained for a sports event.

Data: a new dataset will be provided, containing information collected from 16 people training for a 5 km run: daily sleeping patterns, daily heart rate, sport activities, and image logs of all food consumed during the training period.

Subtask 2 comprises three tasks:

Task 1: Predict the change in running speed given by the change in seconds used per km (kilometer speed) from the initial run to the run at the end of the reporting period.

The valid user ids for the train set are p01, p10, p11, p13, and p16; the valid user ids for the test set (and in the submission file) are p03, p04, p06, p07, and p14.

Task 2: Predict the change in weight since the beginning of the reporting period to the end of the reporting period in kilos (1 decimal).

The valid user ids for the train set are p01, p10, p11, p13, and p16; the valid user ids for the test set (and in the submission file) are p02, p03, p04, p05, p06, p07, p08, p09, p12, and p14.

Task 3: Predict the change in weight from the beginning of February to the end of the reporting period in kilos (1 decimal) using the images.

The valid user id for the train set is p01; the valid user ids for the test set (and in the submission file) are p03 and p05.

Evaluation Methodology

SubTask 1: lifelog moment retrieval (LMRT, 4th edition)

For assessing performance, classic metrics will be deployed. These metrics are:

  • Cluster Recall at X (CR@X) - a metric that assesses how many different clusters from the ground truth are represented among the top X results;
  • Precision at X (P@X) - measures the proportion of relevant photos among the top X results;
  • F1-measure at X (F1@X) - the harmonic mean of the previous two.

Various cut-off points will be considered, e.g., X = 5, 10, 20, 30, 40, 50. The official ranking metric this year is F1-measure@10, which gives equal importance to diversity (via CR@10) and relevance (via P@10).
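The three metrics above can be sketched for a single topic as follows; the data structures here are illustrative (a ranked list of submitted image ids and a hypothetical mapping from each relevant image id to its ground-truth cluster id), not the official scoring code.

```python
def lmrt_metrics(ranked, gt_cluster, x=10):
    """Return (P@X, CR@X, F1@X) for one topic.

    ranked     -- submitted image ids, best first
    gt_cluster -- dict mapping each relevant image id to its cluster id
    """
    top = ranked[:x]
    relevant = [img for img in top if img in gt_cluster]
    # Precision: fraction of the top X results that are relevant.
    p_at_x = len(relevant) / x
    # Cluster recall: fraction of distinct ground-truth clusters covered.
    covered = {gt_cluster[img] for img in relevant}
    cr_at_x = len(covered) / len(set(gt_cluster.values()))
    # F1: harmonic mean of the two.
    f1 = 0.0 if p_at_x + cr_at_x == 0 else 2 * p_at_x * cr_at_x / (p_at_x + cr_at_x)
    return p_at_x, cr_at_x, f1
```

For instance, ten results containing three relevant images that cover two of four ground-truth clusters yield P@10 = 0.3, CR@10 = 0.5, and F1@10 = 0.375.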

Participants may undertake the sub-tasks in an interactive or automatic manner. For interactive submissions, a maximum of five minutes of search time is allowed per topic. The organizers particularly encourage methods that allow interaction with real users (e.g., via Relevance Feedback (RF)): besides raw performance, the mode of interaction (such as the number of RF iterations) and the level of innovation of the method (for example, a new way to interact with real users) will be taken into account.

SubTask 2: sport performance lifelog (SPLL, 1st edition)

For the evaluation of these tasks, the main ranking is based on whether the predicted change has the correct sign (one point per correct prediction); in the event of a tie, the absolute difference between the predicted and actual change is used to rank the participants.
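The ranking rule above can be sketched as a two-part score; the function name and treatment of a zero change are our own assumptions, not the official evaluation code.

```python
def spll_score(predicted, actual):
    """Score one participant's SPLL predictions.

    predicted/actual -- dicts mapping user id to the signed change.
    Returns (points, abs_error): points ranks descending, and abs_error
    breaks ties ascending (smaller total deviation wins).
    """
    # One point per prediction whose sign of change is correct.
    # (A change of exactly 0 is grouped with negative here -- an assumption.)
    points = sum(
        1 for u in actual
        if u in predicted and (predicted[u] > 0) == (actual[u] > 0)
    )
    # Tie-breaker: total absolute deviation from the actual change.
    abs_error = sum(abs(predicted.get(u, 0.0) - actual[u]) for u in actual)
    return points, abs_error
```

Participants would then be sorted by points (higher first) and, on ties, by abs_error (lower first).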

Participant registration

For the LMRT task, please refer to the page.

For the SPLL task, please refer to the page.

For general information of registration, please refer to the main page of ImageCLEF.

Submission instructions

SubTask 1: Lifelog moment retrieval (LMRT)

A submitted run for the LMRT task must be in the form of a csv file in the following format:

[topic id, image id, confidence score]

Where:

  • topic id: Number of the queried topic, e.g., from 1 to 10 for the development set.
  • image id: The image ID that answers the topic. Each image ID is mapped into moments. If more than one sequential image answers the topic (i.e., the moment spans more than one image), then any image from within that moment is acceptable.
  • confidence score: from 0 to 1.

Sample:

1, u1_2015-02-26_095916_1, 1.00
1, u1_2015-02-26_095950_2, 1.00
1, u1_2015-02-26_100028_1, 1.00
...
10, u3_2015-08-01_144854_1, 1.00
10, u3_2015-08-01_145314_1, 1.00
10, u3_2015-08-01_145345_2, 1.00
10, u3_2015-08-01_145531_1, 0.80
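Before submitting, a run file in the format above can be sanity-checked with a small script; the function name and the strictness of the checks are our own choices, based on the format description.

```python
import csv
import io

def check_lmrt_run(text, n_topics=10):
    """Parse an LMRT run ([topic id, image id, confidence score] per line)
    and validate the topic-id and confidence-score ranges."""
    rows = []
    for lineno, row in enumerate(csv.reader(io.StringIO(text)), start=1):
        assert len(row) == 3, f"line {lineno}: expected 3 fields, got {len(row)}"
        topic, image_id, score = (field.strip() for field in row)
        topic, score = int(topic), float(score)
        assert 1 <= topic <= n_topics, f"line {lineno}: bad topic id {topic}"
        assert 0.0 <= score <= 1.0, f"line {lineno}: score out of [0, 1]"
        rows.append((topic, image_id, score))
    return rows
```

Running it over the sample above parses each line into a (topic, image id, score) triple and raises on malformed rows.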

SubTask 2: Sport Performance Lifelog (SPLL)

A submitted run for task 1 of SPLL subtask must be in the form of a csv file in the following format:

[user id, time difference]

Where: time difference has a preceding "+" if the run became slower and a "-" if it became faster.

A submitted run for task 2 and 3 of SPLL subtask must be in the form of a csv file in the following format:

[user id, weight difference]

Where: weight difference has a preceding "+" if the weight increased and a "-" if it decreased.

A submitted run for the SPLL sub-task must be in the form of a text file in the following format:

[task id, user id, difference]

Where:

  • task id: The id of the task in SPLL sub-task (1, 2, or 3).
  • user id: The id of the user in the dataset.
  • difference: The meaning of this field depends on the task id. For task 1, it is the time difference; for tasks 2 and 3, it is the weight difference. The sign conventions for the difference follow the per-task formats described above.
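A line of the combined format above can be produced with a small helper; this is a sketch of our own, assuming whole seconds for task 1 and one decimal place for the weight changes of tasks 2 and 3, as described.

```python
def spll_line(task_id, user_id, diff):
    """Format one [task id, user id, difference] line with an explicit sign.

    Task 1 differences are rendered as whole seconds (an assumption);
    tasks 2 and 3 use one decimal place, per the task description.
    """
    decimals = 0 if task_id == 1 else 1
    # "+" forces the leading sign required by the format.
    return f"{task_id}, {user_id}, {diff:+.{decimals}f}"
```

For example, a 12-second slowdown for user p03 in task 1 becomes "1, p03, +12", and a 0.5 kg gain in task 2 becomes "2, p03, +0.5".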

Submission files

The file name must follow the rule <task abbreviation>_<team name without spaces>_<run name without spaces>.csv

Examples:

- LMRT_DCU_run1.csv

- SPLL_DCU_run1.csv

Topics and Ground Truth Release

SubTask 1: Lifelog moment retrieval (LMRT)

There are 10 dev topics for the LMRT task, available via the link. The clusters and ground truth for these 10 dev topics are here.

The 10 test topics for the LMRT task are here.

*** Notice: the ground-truth format is [topic id, image id, cluster id], whose third column differs from the submission format [topic id, image id, confidence score]. The cluster id is used to measure the diversity of the retrieved results for each topic. Participants should follow the submission instructions to generate a correctly formatted submission file.

SubTask 2: Sport Performance Lifelog (SPLL)

The ground truth for the train set of the SPLL task is here.

Recommended Reading

[1] Duc-Tien Dang-Nguyen, Luca Piras, Michael Riegler, Minh-Triet Tran, Liting Zhou, Mathias Lux, Tu-Khiem Le, Van-Tu Ninh and Cathal Gurrin. 2019. Overview of ImageCLEFlifelog 2019: Solve my life puzzle and Lifelog Moment Retrieval, CLEF2019 Working Notes (CEUR Workshop Proceedings), Lugano, Switzerland.

[2] Cathal Gurrin, Klaus Schoeffmann, Hideo Joho, Duc-Tien Dang-Nguyen, Michael Riegler, Luca Piras, "Proceedings of the 2019 ACM Workshop on The Lifelog Search Challenge", ICMR '19 International Conference on Multimedia Retrieval, Ottawa, ON, Canada, June 2019.

[3] Duc-Tien Dang-Nguyen, Luca Piras, Michael Riegler, Liting Zhou, Mathias Lux, and Cathal Gurrin, "Overview of ImageCLEFlifelog 2018: Daily Living Understanding and Lifelog Moment Retrieval", CLEF2018 Working Notes, Avignon, France, 2018.

[4] Cathal Gurrin, Klaus Schoeffmann, Hideo Joho, Duc-Tien Dang-Nguyen, Michael Riegler, Luca Piras, "Proceedings of the 2018 ACM Workshop on The Lifelog Search Challenge", ICMR '18 International Conference on Multimedia Retrieval, Yokohama, Japan, June 11-14, 2018.

[5] Duc-Tien Dang-Nguyen, Luca Piras, Michael Riegler, Giulia Boato, Liting Zhou, Cathal Gurrin, "Overview of ImageCLEFlifelog 2017: Lifelog Retrieval and Summarization", CLEF2017 Working Notes, Dublin, Ireland, 2017, vol 1866.

[6] Cathal Gurrin, Xavier Giro-i-Nieto, Petia Radeva, Mariella Dimiccoli, Håvard Johansen, Hideo Joho, Vivek K Singh, "LTA 2016: The First Workshop on Lifelogging Tools and Applications", ACM Multimedia, Amsterdam, The Netherlands, 2016.

[7] Cathal Gurrin, Xavier Giro-i-Nieto, Petia Radeva, Mariella Dimiccoli, Duc Tien Dang Nguyen, Hideo Joho, "LTA 2017: The Second Workshop on Lifelogging Tools and Applications", ACM Multimedia, Mountain View, CA, USA, 2017.

[8] Cathal Gurrin, Hideo Joho, Frank Hopfgartner, Liting Zhou, Rami Albatal, "Overview of NTCIR-12 Lifelog Task", Proceedings of the 12th NTCIR Conference on Evaluation of Information Access Technologies, Tokyo, Japan, 2016.

[9] Duc-Tien Dang-Nguyen, Luca Piras, Giorgio Giacinto, Giulia Boato, Francesco GB De Natale, "Multimodal Retrieval with Diversification and Relevance Feedback for Tourist Attraction Images", ACM Transactions on Multimedia Computing, Communications, and Applications, vol. 13, no. 4, 2017.

[10] Duc-Tien Dang-Nguyen, Luca Piras, Giorgio Giacinto, Giulia Boato, Francesco GB De Natale, "A hybrid approach for retrieving diverse social images of landmarks", IEEE International Conference on Multimedia and Expo (ICME), Turin, Italy, 2015.

[11] Working notes of the 2015 MediaEval Retrieving Diverse Social Images task, CEUR-WS.org, Vol. 1436, ISSN: 1613-0073.

[12] B. Ionescu, A.L. Gînscă, B. Boteanu, M. Lupu, A. Popescu, H. Müller, "Div150Multi: A Social Image Retrieval Result Diversification Dataset with Multi-topic Queries", ACM MMSys, Klagenfurt, Austria, 2016.

Helpful tools and resources

Eyeaware lifelogging framework;

OpenCV – Open Source Computer Vision;

LIRE: Lucene Image Retrieval;

trec_eval scoring software;

ImageCLEF - Image Retrieval in CLEF;

Weka Data Mining Software;

Nvidia DIGITS;

Caffe deep learning framework;

Creative Commons.

Citations

  • When referring to the ImageCLEFlifelog 2020 task general goals, general results, etc. please cite the following publication:
    • Van-Tu Ninh, Tu-Khiem Le, Liting Zhou, Luca Piras, Michael Riegler, Pål Halvorsen, Minh-Triet Tran, Mathias Lux, Cathal Gurrin, Duc-Tien Dang-Nguyen. 2020. Overview of ImageCLEFlifelog 2020: Lifelog Moment Retrieval and Sport Performance Lifelog. In CLEF2020 Working Notes (CEUR Workshop Proceedings). CEUR-WS.org <http://ceur-ws.org>, Thessaloniki, Greece.
    • BibTex:

      @Inproceedings{LifeLogTask20_CLEF,
        author = {Van-Tu Ninh and Tu-Khiem Le and Liting Zhou and Luca Piras and Michael Riegler and P\aa l Halvorsen and Minh-Triet Tran and Mathias Lux and Cathal Gurrin and Duc-Tien Dang-Nguyen},
        title = {{Overview of ImageCLEF Lifelog 2020: Lifelog Moment Retrieval and Sport Performance Lifelog}},
        booktitle = {CLEF2020 Working Notes},
        series = {{CEUR} Workshop Proceedings},
        year = {2020},
        volume = {},
        publisher = {CEUR-WS.org $<$http://ceur-ws.org$>$},
        pages = {},
        month = {September 22-25},
        address = {Thessaloniki, Greece},
        }
  • When referring to the ImageCLEF 2020 lab general goals, general results, etc. please cite the following publication which will be published by September 2020 (also referred to as ImageCLEF general overview):
    • Bogdan Ionescu, Henning Müller, Renaud Péteri, Asma Ben Abacha, Vivek Datla, Sadid A. Hasan, Dina Demner-Fushman, Serge Kozlovski, Vitali Liauchuk, Yashin Dicente Cid, Vassili Kovalev, Obioma Pelka, Christoph M. Friedrich, Alba García Seco de Herrera, Van-Tu Ninh, Tu-Khiem Le, Liting Zhou, Luca Piras, Michael Riegler, Pål Halvorsen, Minh-Triet Tran, Mathias Lux, Cathal Gurrin, Duc-Tien Dang-Nguyen, Jon Chamberlain, Adrian Clark, Antonio Campello, Dimitri Fichou, Raul Berari, Paul Brie, Mihai Dogariu, Liviu Daniel Ștefan, Mihai Gabriel Constantin, Overview of the ImageCLEF 2020: Multimedia Retrieval in Medical, Lifelogging, Nature, and Internet Applications In: Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the 11th International Conference of the CLEF Association (CLEF 2020), Thessaloniki, Greece, LNCS Lecture Notes in Computer Science, 12260, Springer (September 22-25, 2020).
    • BibTex:
      @inproceedings{ImageCLEF20,
        author = {Bogdan Ionescu and Henning M\"uller and Renaud P\'{e}teri and Asma Ben Abacha and Vivek Datla and Sadid A. Hasan and Dina Demner-Fushman and Serge Kozlovski and Vitali Liauchuk and Yashin Dicente Cid and Vassili Kovalev and Obioma Pelka and Christoph M. Friedrich and Alba Garc\'{\i}a Seco de Herrera and Van-Tu Ninh and Tu-Khiem Le and Liting Zhou and Luca Piras and Michael Riegler and P\aa l Halvorsen and Minh-Triet Tran and Mathias Lux and Cathal Gurrin and Duc-Tien Dang-Nguyen and Jon Chamberlain and Adrian Clark and Antonio Campello and Dimitri Fichou and Raul Berari and Paul Brie and Mihai Dogariu and Liviu Daniel \c{S}tefan and Mihai Gabriel Constantin},
        title = {{Overview of the ImageCLEF 2020}: Multimedia Retrieval in Medical, Lifelogging, Nature, and Internet Applications},
        booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction},
        series = {Proceedings of the 11th International Conference of the CLEF Association (CLEF 2020)},
        year = {2020},
        volume = {12260},
        publisher = {{LNCS} Lecture Notes in Computer Science, Springer},
        pages = {},
        month = {September 22-25},
        address = {Thessaloniki, Greece}
        }

Organizers