ImageCLEFlifelog 2020

Welcome to the 4th edition of the Lifelog Task!

Motivation

An increasingly wide range of personal devices, such as smartphones, video cameras, and wearable devices, makes it possible to capture pictures, videos, and audio clips for every moment of our lives. Given the huge volume of data created, there is a need for systems that can automatically analyse the data in order to categorize and summarize it, and that can be queried to retrieve the information the user may need.

ImageCLEF Lifelog schedule

  • 10.02.2020: Registration opens for ImageCLEF Lifelog tasks
  • 17.02.2020: Development data released
  • 14.04.2020: Test data release starts
  • 15.05.2020: Deadline for submitting participant runs
  • 18.05.2020: Release of the processed results by the task organizers
  • 30.05.2020: Deadline for submission of working notes papers by the participants
  • 15.06.2020: Notification of acceptance of the working notes papers
  • 29.06.2020: Camera ready working notes papers
  • 22-25.09.2020: CLEF 2020, Thessaloniki, Greece

Task description

Dataset: New for 2020

The 4th edition of this task comes with new, enriched data focused on daily living activities and the chronological order of moments, together with a completely new task for assessing sport performance.

SubTask 1: lifelog moment retrieval (LMRT, 4th edition)

The participants are required to retrieve a number of specific predefined activities in a lifelogger's life. For example, they are asked to return the relevant moments for the query “Find the moment(s) when the lifelogger was having an ice cream on the beach”. Particular attention should be paid to the diversification of the selected moments with respect to the target scenario.

Data: A new, rich multimodal dataset will be used, comprising about 4.5 months of data from three lifeloggers (1,500-2,500 images per day), together with visual concepts, semantic content, biometric information, music listening history, and computer usage.

SubTask 2: sport performance lifelog (SPLL, 1st edition)

The participants are required to predict the expected performance (e.g., estimated finishing time, average heart rate, and calorie consumption) of an athlete who trained for a sport event.

Data: A new dataset will be provided, containing information collected from 2-3 people training for a 10 km run: daily sleeping patterns, daily heart rate, sport activities, image logs of all food consumed during the training period, and close-up images of the runners' eyes before and after training (time stamps from December 2019 to April 2020).

SubTask 2 comprises three tasks:

Task 1: Predict whether the time of the 5km run has improved from the first run to the last run at the end of March.

Task 2: Predict whether the weight has been reduced from the first registration until the end of March.

Task 3: Predict whether the weight has changed in February for the participants who also took pictures of all food and drink consumed (except water).

Evaluation Methodology

SubTask 1: lifelog moment retrieval (LMRT, 4th edition)

For assessing performance, classic metrics will be deployed. These metrics are:

  • Cluster Recall at X (CR@X) - a metric that assesses how many different clusters from the ground truth are represented among the top X results;
  • Precision at X (P@X) - the fraction of relevant photos among the top X results;
  • F1-measure at X (F1@X) - the harmonic mean of the previous two.

Various cut-off points will be considered, e.g., X = 5, 10, 20, 30, 40, 50. The official ranking metric this year will be the F1-measure@10, which gives equal importance to diversity (via CR@10) and relevance (via P@10).
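The three metrics above can be sketched as follows. This is an illustrative implementation, not the official scoring code; the ground-truth representation (a mapping from relevant image id to cluster id) is an assumption based on the ground-truth format described later on this page.

```python
def lmrt_metrics(ranked_ids, gt_clusters, x=10):
    """Compute P@X, CR@X and F1@X for a single topic.

    ranked_ids  -- submitted image ids for the topic, best first
    gt_clusters -- dict mapping each relevant image id to its cluster id
                   (illustrative representation of the ground truth)
    """
    top = ranked_ids[:x]
    relevant = [i for i in top if i in gt_clusters]

    # P@X: fraction of relevant images among the top X results
    p = len(relevant) / x

    # CR@X: fraction of ground-truth clusters represented in the top X
    total_clusters = len(set(gt_clusters.values()))
    found_clusters = len({gt_clusters[i] for i in relevant})
    cr = found_clusters / total_clusters if total_clusters else 0.0

    # F1@X: harmonic mean of P@X and CR@X
    f1 = 2 * p * cr / (p + cr) if (p + cr) else 0.0
    return p, cr, f1
```

For example, with a ground truth of three relevant images in two clusters, retrieving one image from each cluster in the top 3 yields P@3 = 2/3, CR@3 = 1.0 and F1@3 = 0.8.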

Participants are allowed to undertake the sub-tasks in an interactive or automatic manner. For interactive submissions, a maximum of five minutes of search time is allowed per topic. In particular, the organizers would like to encourage methods that allow interaction with real users (e.g., via Relevance Feedback (RF)): besides raw performance, the mode of interaction (such as the number of RF iterations) and the innovation level of the method (for example, a new way to interact with real users) will be taken into account.

SubTask 2: sport performance lifelog (SPLL, 1st edition)

For the evaluation of these tasks, the main ranking will be based on whether the direction of change (positive or negative) is predicted correctly, with one point per correct prediction. In the case of a draw, the difference between the predicted and actual change will be used to rank the participants.
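A minimal sketch of this ranking rule, assuming signed changes per user (the function name and data layout are illustrative, not part of the official evaluation code):

```python
def spll_rank_key(predicted, actual):
    """Sort key for ranking SPLL participants.

    predicted, actual -- dicts mapping user id -> signed change
    Returns (-points, total_error): sorting ascending puts the
    participant with the most correct directions first, breaking
    ties by the absolute error between predicted and actual change.
    Zero changes are treated as non-positive here for simplicity.
    """
    points = sum(
        1 for u in actual
        if u in predicted and (predicted[u] > 0) == (actual[u] > 0)
    )
    error = sum(abs(predicted.get(u, 0.0) - actual[u]) for u in actual)
    return (-points, error)
```

Sorting a list of participants by this key then yields the main ranking described above.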

Participant registration

For the LMRT task, please refer to the task page.

For the SPLL task, please refer to the task page.

For general information on registration, please refer to the main ImageCLEF page.

Submission instructions

SubTask 1: Lifelog moment retrieval (LMRT)

A submitted run for the LMRT task must be a CSV file in the following format:

[topic id, image id, confidence score]

Where:

  • topic id: Number of the queried topic, e.g., from 1 to 10 for the development set.
  • image id: The image ID that answers the topic. Each image ID is mapped into moments. If more than one sequential image answers the topic (i.e., the moment spans more than one image), then any image from within that moment is acceptable.
  • confidence score: from 0 to 1.

Sample:

1, u1_2015-02-26_095916_1, 1.00
1, u1_2015-02-26_095950_2, 1.00
1, u1_2015-02-26_100028_1, 1.00
...
10, u3_2015-08-01_144854_1, 1.00
10, u3_2015-08-01_145314_1, 1.00
10, u3_2015-08-01_145345_2, 1.00
10, u3_2015-08-01_145531_1, 0.80
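A run file in this format can be produced with a few lines of code. This is a hedged sketch (the helper name is illustrative); it simply reproduces the [topic id, image id, confidence score] layout of the sample above:

```python
def write_lmrt_run(rows, path):
    """Write an LMRT run file.

    rows -- iterable of (topic_id, image_id, confidence) tuples,
            one result per line, confidence formatted to two decimals
            as in the sample run shown on this page.
    """
    with open(path, "w") as f:
        for topic_id, image_id, score in rows:
            f.write(f"{topic_id}, {image_id}, {score:.2f}\n")

write_lmrt_run(
    [(1, "u1_2015-02-26_095916_1", 1.0),
     (10, "u3_2015-08-01_145531_1", 0.8)],
    "LMRT_DCU_run1.csv",
)
```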

SubTask 2: Sport Performance Lifelog (SPLL)

A submitted run for Task 1 of the SPLL subtask must be a CSV file in the following format:

[user id, time difference]

Where: the time difference has a preceding "+" if the time is slower and a "-" if it is faster.

A submitted run for Tasks 2 and 3 of the SPLL subtask must be a CSV file in the following format:

[user id, weight difference]

Where: the weight difference has a preceding "+" if the weight has increased and a "-" if it has decreased.
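Note that the "+" sign must be written explicitly; a plain string conversion of a positive number drops it. A small illustrative formatter (the helper name and two-decimal precision are assumptions, not an official requirement):

```python
def format_spll_row(user_id, difference):
    """Format one SPLL result line with an explicit sign.

    The "+" format flag makes Python emit "+" for positive values
    (and zero) and "-" for negative values.
    """
    return f"{user_id}, {difference:+.2f}"

format_spll_row("u1", 1.5)    # "u1, +1.50"
format_spll_row("u2", -0.75)  # "u2, -0.75"
```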

Submission files

The file name must follow the pattern <task abbreviation>_<team name without spaces>_<run name without spaces>.csv

Examples:

- LMRT_DCU_run1.csv

- SPLL_DCU_run1.csv
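A quick sanity check of a run file name against this pattern might look as follows (the regular expression is an illustrative reading of the rule, not an official validator):

```python
import re

# <task abbreviation>_<team name>_<run name>.csv, with no whitespace
RUN_NAME = re.compile(r"^(LMRT|SPLL)_\S+_\S+\.csv$")

RUN_NAME.match("LMRT_DCU_run1.csv")   # matches
RUN_NAME.match("LMRT_DCU run1.csv")   # does not match (space in name)
```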

Topics and Ground Truth Release

SubTask 1: Lifelog moment retrieval (LMRT)

There are 10 dev topics for the LMRT task, available via the linked file. The clusters and ground truth for these 10 dev topics are also provided.

*** Notice: the ground truth format is [topic id, image id, cluster id], which differs from the submission format [topic id, image id, confidence score]. The cluster id is used to measure the diversity of the retrieved results for each topic. Participants should follow the submission instructions to generate a correctly formatted submission file.

SubTask 2: Sport Performance Lifelog (SPLL)

Recommended Reading

[1] Duc-Tien Dang-Nguyen, Luca Piras, Michael Riegler, Minh-Triet Tran, Liting Zhou, Mathias Lux, Tu-Khiem Le, Van-Tu Ninh and Cathal Gurrin. 2019. Overview of ImageCLEFlifelog 2019: Solve my life puzzle and Lifelog Moment Retrieval, CLEF2019 Working Notes (CEUR Workshop Proceedings), Lugano, Switzerland.

[2] Cathal Gurrin, Klaus Schoeffmann, Hideo Joho, Duc-Tien Dang-Nguyen, Michael Riegler, Luca Piras, "Proceedings of the 2019 ACM Workshop on The Lifelog Search Challenge", ICMR '19 International Conference on Multimedia Retrieval, Ottawa, ON, Canada, June 2019.

[3] Duc-Tien Dang-Nguyen, Luca Piras, Michael Riegler, Liting Zhou, Mathias Lux, and Cathal Gurrin, "Overview of ImageCLEFlifelog 2018: Daily Living Understanding and Lifelog Moment Retrieval", CLEF2018 Working Notes, Avignon, France, 2018.

[4] Cathal Gurrin, Klaus Schoeffmann, Hideo Joho, Duc-Tien Dang-Nguyen, Michael Riegler, Luca Piras, "Proceedings of the 2018 ACM Workshop on The Lifelog Search Challenge", ICMR '18 International Conference on Multimedia Retrieval, Yokohama, Japan, June 11-14, 2018.

[5] Duc-Tien Dang-Nguyen, Luca Piras, Michael Riegler, Giulia Boato, Liting Zhou, Cathal Gurrin, "Overview of ImageCLEFlifelog 2017: Lifelog Retrieval and Summarization", CLEF2017 Working Notes, Dublin, Ireland, 2017, vol 1866.

[6] Cathal Gurrin, Xavier Giro-i-Nieto, Petia Radeva, Mariella Dimiccoli, Håvard Johansen, Hideo Joho, Vivek K Singh, "LTA 2016: The First Workshop on Lifelogging Tools and Applications", ACM Multimedia, Amsterdam, The Netherlands, 2016.

[7] Cathal Gurrin, Xavier Giro-i-Nieto, Petia Radeva, Mariella Dimiccoli, Duc Tien Dang Nguyen, Hideo Joho, "LTA 2017: The Second Workshop on Lifelogging Tools and Applications", ACM Multimedia, Mountain View, CA, USA, 2017.

[8] Cathal Gurrin, Hideo Joho, Frank Hopfgartner, Liting Zhou, Rami Albatal, "Overview of NTCIR-12 Lifelog Task", Proceedings of the 12th NTCIR Conference on Evaluation of Information Access Technologies, Tokyo, Japan, 2016.

[9] Duc-Tien Dang-Nguyen, Luca Piras, Giorgio Giacinto, Giulia Boato, Francesco GB De Natale, "Multimodal Retrieval with Diversification and Relevance Feedback for Tourist Attraction Images", ACM Transactions on Multimedia Computing, Communications, and Applications, vol 13, no. 4, 2017.

[10] Duc-Tien Dang-Nguyen, Luca Piras, Giorgio Giacinto, Giulia Boato, Francesco GB De Natale, "A hybrid approach for retrieving diverse social images of landmarks", IEEE International Conference on Multimedia and Expo (ICME), Turin, Italy, 2015.

[11] Working notes of the 2015 MediaEval Retrieving Diverse Social Images task, CEUR-WS.org, Vol. 1436, ISSN: 1613-0073.

[12] B. Ionescu, A.L. Gînscă, B. Boteanu, M. Lupu, A. Popescu, H. Müller, "Div150Multi: A Social Image Retrieval Result Diversification Dataset with Multi-topic Queries", ACM MMSys, Klagenfurt, Austria, 2016.

Helpful tools and resources

Eyeaware lifelogging framework;

OpenCV – Open Source Computer Vision;

LIRE: Lucene Image Retrieval;

trec_eval scoring software;

ImageCLEF - Image Retrieval in CLEF;

Weka Data Mining Software;

Nvidia DIGITS;

Caffe deep learning framework;

Creative Commons.

Organizers