

Welcome to the 3rd edition of the Lifelog Task!


An increasingly wide range of personal devices, such as smartphones, video cameras, and wearable devices, makes it possible to capture pictures, videos, and audio clips for every moment of our lives. Given the huge volume of data created, there is a need for systems that can automatically analyse the data in order to categorize and summarize it, and to answer queries that retrieve the information a user may need.

Despite an increasing number of successful related workshops and panels (JCDL 2015, iConf 2016, ACM MM 2016, ACM MM 2017, ICMR 2018), lifelogging has seldom been the subject of a rigorous comparative benchmarking exercise; notable exceptions are the lifelog evaluation task at NTCIR-13 and the previous editions of the ImageCLEFlifelog task. In this edition of the task we aim to bring lifelogging to the attention of as wide an audience as possible and to promote research into some of its key challenges for the coming years.


News

  • 26.11.2018: Development dataset released here.
  • 07.02.2019: Development queries released and task descriptions revised!
  • 08.02.2019: Development PUZZLE dataset and LMRT ground truth are released!
  • 21.03.2019: Test data for PUZZLE and new topics for LMRT will be released on 29.03.2019.
  • 04.04.2019: Development LMRT ground truth and clusters have been revised and updated! Please check the updated version.
  • 05.04.2019: The ten new test topics have also been revised and updated!
  • 18.04.2019: The evaluation system is ready to use on crowdAI!
  • 25.04.2019: Clarification of the released LMRT ground truth format.
  • 29.04.2019: The crowdAI submission deadline has been postponed to May 15th at 12:00 UTC.
  • 22.05.2019: The working notes submission deadline has been postponed to May 28th at 12:00 UTC (firm).


Schedule

  • 05.11.2018: Registration opens
  • 26.11.2018: Development data release
  • 18.03.2019: Test data release
  • 26.04.2019: Registration closes
  • 15.05.2019 (firm, extended from 01.05.2019): Deadline for submission of runs by the participants, 11:59:59 PM GMT.
  • 17.05.2019: Release of processed results by the task organizers.
  • 28.05.2019 (firm, extended from 24.05.2019): Deadline for submission of working notes papers by the participants.
  • 14.06.2019: Notification of acceptance of the working notes papers.
  • 28.06.2019: Camera ready working notes papers.
  • 09.-12.09.2019: CLEF 2019, Lugano, Switzerland

Task description


The task is split into two related subtasks using a completely new rich multimodal dataset, which consists of 29 days of data from one lifelogger, namely:

  • images: 1,500-2,500 per day from wearable cameras;
  • visual concepts: automatically extracted visual concepts with varying rates of accuracy;
  • semantic content: semantic locations and semantic activities based on sensor readings (via the Moves App) on mobile devices;
  • biometrics information: heart rate, galvanic skin response, calorie burn, steps, continual blood glucose, etc.;
  • music listening history;
  • computer usage: frequency of typed words via the keyboard and information consumed on the computer via ASR of on-screen activity on a per-minute basis.

A detailed description of the metadata is here!

SubTask 1: Solve my life puzzle (Puzzle)

Given a set of lifelog images with associated metadata (e.g., biometrics, location) but no timestamps, participants need to analyse the images, rearrange them in chronological order, and predict the correct day (Monday or Sunday) and part of the day (morning, afternoon, or evening) for each image. The dataset is split into 75% training data and 25% test data.

SubTask 2: Lifelog moment retrieval (LMRT)

This subtask follows the successful LMRT subtask of ImageCLEFlifelog 2018, with some minor adjustments. Participants have to retrieve a number of specific predefined activities in a lifelogger's life. For example, they should return the relevant moments for the query “Find the moment(s) when I was shopping”. Particular attention should be paid to the diversification of the selected moments with respect to the target scenario. The ground truth for this subtask was created using manual annotation.

Submission instructions

The submissions will be received through the ImageCLEF 2019 system. Go to "Runs", then "Submit run", and then select the track.

Participants will be permitted to submit up to 10 runs.

Each system run will consist of a single ASCII plain text file.

The results of each run should be given in separate lines in the text file.

SubTask 1: Puzzle


A submitted run for the Puzzle task must be in the form of a text file in the following format:

[query id, image id, order, part of the day]


  • query id: the id of the query that the participant is answering.
  • image id: the id of the image being placed; each image must be assigned an order and a predicted part of the day.
  • order: the position of the image once the moments have been arranged in chronological order; the index starts from 1.
  • part of the day: the index of the part of the day the image belongs to: morning - 1 (4:00 AM to 11:59 AM), afternoon - 2 (12:00 PM to 4:59 PM), evening - 3 (5:00 PM to 10:59 PM), night - 4 (11:00 PM to 3:59 AM).

Note: Please sort the results by the image id.

For example:
001, 001.JPG, 5, 2
001, 002.JPG, 1, 1
010, 025.JPG, 23, 3
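As a sketch, the format above can be produced with a few lines of Python. The helper names (`part_of_day`, `format_puzzle_run`) are ours, not part of the task; only the four-column, image-id-sorted layout is fixed by the instructions.

```python
# Illustrative helpers for the Puzzle submission format
# (names are hypothetical; only the layout is fixed by the task).

def part_of_day(hour):
    """Map an hour (0-23) to the task's part-of-day index:
    1 = morning (04:00-11:59), 2 = afternoon (12:00-16:59),
    3 = evening (17:00-22:59), 4 = night (23:00-03:59)."""
    if 4 <= hour <= 11:
        return 1
    if 12 <= hour <= 16:
        return 2
    if 17 <= hour <= 22:
        return 3
    return 4  # 23:00-03:59

def format_puzzle_run(rows):
    """rows: iterable of (query_id, image_id, order, part_index).
    Returns the run file contents, one result per line,
    sorted by image id as the note above requires."""
    return "\n".join(
        f"{q}, {img}, {order}, {part}"
        for q, img, order, part in sorted(rows, key=lambda r: r[1])
    )
```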

SubTask 2: Lifelog moment retrieval (LMRT)

A submitted run for the LMRT task must be in the form of a text file in the following format:

[topic id, image id, confidence score]


  • topic id: Number of the queried topic, e.g., from 1 to 10 for the development set.
  • image id: The image ID that answers the topic. Each image ID is mapped into moments. If there are more than one sequential images that answer the topic (i.e. the moment is more than one image in duration), then any image from within that moment is acceptable.
  • confidence score: from 0 to 1.

For example:
1, u1_2015-02-26_095916_1, 1.00
1, u1_2015-02-26_095950_2, 1.00
1, u1_2015-02-26_100028_1, 1.00
10, u3_2015-08-01_144854_1, 1.00
10, u3_2015-08-01_145314_1, 1.00
10, u3_2015-08-01_145345_2, 1.00
10, u3_2015-08-01_145531_1, 0.80
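A minimal, hypothetical validator for this format (the function name is ours; the task only fixes the three-column layout and the 0-1 confidence range):

```python
def parse_lmrt_line(line):
    """Parse one '[topic id, image id, confidence score]' line,
    checking that the confidence score lies in [0, 1]."""
    topic, image, conf = (part.strip() for part in line.split(","))
    score = float(conf)
    if not 0.0 <= score <= 1.0:
        raise ValueError(f"confidence score out of range: {score}")
    return int(topic), image, score
```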

Submission files

The file name must follow the rule <task abbreviation>_<team name without spaces>_<run name without spaces>.csv

For example:
- PUZZLE_DCU_run1.csv

- LMRT_DCU_run1.csv
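The naming rule can be sketched as a regular-expression check, assuming (as in the examples above) that the three parts are separated by single underscores:

```python
import re

# <task abbreviation>_<team name without spaces>_<run name without spaces>.csv
RUN_FILENAME = re.compile(r"^(PUZZLE|LMRT)_\S+_\S+\.csv$")

def is_valid_run_filename(name):
    """True if the file name matches the required pattern."""
    return RUN_FILENAME.match(name) is not None
```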

Evaluation Methodology

For each subtask, the final score is computed as the arithmetic mean over all queries. For each query, the evaluation method is applied as follows:

SubTask 1: Solve my life puzzle (Puzzle)

We use the Kendall rank correlation coefficient (Kendall's Tau) to evaluate the similarity between the participant's arrangement and the ground truth. For more information, please refer to this link.

The final score for this task is the mean of (i) the accuracy of predicting which part of the day each image belongs to and (ii) the Kendall's Tau coefficient.
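Under those definitions, the score for one query can be sketched as follows (a plain O(n²) Kendall's Tau without tie handling, which suits arrangements where every image gets a distinct rank; the function names are ours):

```python
def kendall_tau(pred, truth):
    """Kendall rank correlation between two rankings of the same items.
    pred and truth map each image id to its rank; no ties assumed."""
    items = list(truth)
    n = len(items)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            a, b = items[i], items[j]
            sign = (pred[a] - pred[b]) * (truth[a] - truth[b])
            if sign > 0:
                concordant += 1
            elif sign < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def puzzle_query_score(pred_rank, true_rank, pred_part, true_part):
    """Mean of Kendall's Tau on the arrangement and the
    part-of-day prediction accuracy, as described above."""
    accuracy = sum(
        pred_part[img] == true_part[img] for img in true_part
    ) / len(true_part)
    return (kendall_tau(pred_rank, true_rank) + accuracy) / 2
```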

SubTask 2: Lifelog moment retrieval (LMRT)

For assessing performance, classic metrics will be deployed. These metrics are:

  • Cluster Recall at X (CR@X) - a metric that assesses how many different clusters from the ground truth are represented among the top X results;
  • Precision at X (P@X) - measures the proportion of relevant photos among the top X results;
  • F1-measure at X (F1@X) - the harmonic mean of the previous two.

Various cut-off points are to be considered, e.g., X = 5, 10, 20, 30, 40, 50. The official ranking metric this year is F1-measure@10, which gives equal importance to diversity (via CR@10) and relevance (via P@10).
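As a sketch, the three metrics can be computed as below (function names and signatures are our own; `image_cluster` maps each relevant image to its ground-truth cluster id):

```python
def precision_at(ranked, relevant, x):
    """P@X: fraction of relevant photos among the top X results."""
    return sum(img in relevant for img in ranked[:x]) / x

def cluster_recall_at(ranked, image_cluster, n_clusters, x):
    """CR@X: fraction of ground-truth clusters represented in the top X."""
    seen = {image_cluster[img] for img in ranked[:x] if img in image_cluster}
    return len(seen) / n_clusters

def f1_at(ranked, relevant, image_cluster, n_clusters, x):
    """F1@X: harmonic mean of P@X and CR@X."""
    p = precision_at(ranked, relevant, x)
    cr = cluster_recall_at(ranked, image_cluster, n_clusters, x)
    return 0.0 if p + cr == 0.0 else 2 * p * cr / (p + cr)
```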

Participants may undertake the subtasks in an interactive or automatic manner. For interactive submissions, a maximum of five minutes of search time is allowed per topic. The organizers particularly encourage methods that allow interaction with real users, for example via Relevance Feedback (RF): besides raw performance, the manner of interaction (e.g., the number of RF iterations) and the novelty of the method (e.g., new ways of interacting with real users) will be taken into account.

Submitting a working notes paper to CLEF

Upon completion of the task, participating teams are expected to describe their systems in a working notes paper, regardless of their results. Keep in mind that the main goal of the lab is not to win the benchmark but to compare techniques on the same data, so that everyone can learn from the results. Authors are invited to submit using the LNCS proceedings format.

The CLEF 2019 working notes will be published in the proceedings, facilitating indexing by DBLP. In accordance with CEUR-WS policies, the task organizers will conduct a light review of the working notes to ensure quality.

Working notes have to be submitted before 28th May 2019, 11:59 pm (midnight) Central European Summer Time, through the EasyChair submission system. The working notes papers are technical reports, written in English, describing the participating systems and the conducted experiments. To avoid redundancy, the papers should *not* include a detailed description of the actual task, data set, and experimentation protocol. Instead, the papers are required to cite both the general ImageCLEF overview paper and the corresponding lifelog task overview paper, and to present the official results returned by the organizers. BibTeX references will be available soon. At a minimum, the paper should provide the following information:

  1. Title
  2. Authors
  3. Affiliations
  4. Email addresses of all authors
  5. The body of the text. This should contain information on:
    • tasks performed
    • main objectives of experiments
    • approach(es) used and progress beyond state-of-the-art
    • resources employed
    • results obtained
    • analysis of the results
    • perspectives for future work

The paper should not exceed 12 pages. Further instructions on how to write and submit your working notes will be available soon on this page.

Topics and Ground Truth Release

SubTask 1: Solve my life puzzle (Puzzle)

The data for the Puzzle task is here! The ground truth for the development queries can be downloaded here.

SubTask 2: Lifelog moment retrieval (LMRT)

There are 10 development topics for the LMRT task, as linked! The clusters and ground truth for these 10 dev topics are here!

*** Notice: the ground truth format is [topic id, image id, cluster id]; its third column differs from the submission format [topic id, image id, confidence score]. The cluster id is used to measure the diversity of the retrieved results for each topic. Participants should follow the submission instructions to generate a correctly formatted submission file.

The 10 new test topics for LMRT have been revised and updated!

Recommended Reading

[1] Duc-Tien Dang-Nguyen, Luca Piras, Michael Riegler, Liting Zhou, Mathias Lux, and Cathal Gurrin, "Overview of ImageCLEFlifelog 2018: Daily Living Understanding and Lifelog Moment Retrieval", CLEF2018 Working Notes, Avignon, France, 2018.

[2] Cathal Gurrin, Klaus Schoeffmann, Hideo Joho, Duc-Tien Dang-Nguyen, Michael Riegler, Luca Piras, "Proceedings of the 2018 ACM Workshop on The Lifelog Search Challenge", ICMR '18 International Conference on Multimedia Retrieval, Yokohama, Japan, June 11-14, 2018.

[3] Duc-Tien Dang-Nguyen, Luca Piras, Michael Riegler, Giulia Boato, Liting Zhou, Cathal Gurrin, "Overview of ImageCLEFlifelog 2017: Lifelog Retrieval and Summarization", CLEF2017 Working Notes, Dublin, Ireland, 2017, vol 1866.

[4] Cathal Gurrin, Xavier Giro-i-Nieto, Petia Radeva, Mariella Dimiccoli, Håvard Johansen, Hideo Joho, Vivek K Singh, "LTA 2016: The First Workshop on Lifelogging Tools and Applications", ACM Multimedia, Amsterdam, The Netherlands, 2016.

[5] Cathal Gurrin, Xavier Giro-i-Nieto, Petia Radeva, Mariella Dimiccoli, Duc Tien Dang Nguyen, Hideo Joho, "LTA 2017: The Second Workshop on Lifelogging Tools and Applications", ACM Multimedia, Mountain View, CA USA, 2017.

[6] Cathal Gurrin, Hideo Joho, Frank Hopfgartner, Liting Zhou, Rami Albatal, "Overview of NTCIR-12 Lifelog Task", Proceedings of the 12th NTCIR Conference on Evaluation of Information Access Technologies, Tokyo, Japan, 2016.

[7] Duc-Tien Dang-Nguyen, Luca Piras, Giorgio Giacinto, Giulia Boato, Francesco GB De Natale, "Multimodal Retrieval with Diversification and Relevance Feedback for Tourist Attraction Images", ACM Transactions on Multimedia Computing, Communications, and Applications, vol 13, n° 4, 2017.

[8] Duc-Tien Dang-Nguyen, Luca Piras, Giorgio Giacinto, Giulia Boato, Francesco GB De Natale, "A hybrid approach for retrieving diverse social images of landmarks", IEEE International Conference on Multimedia and Expo (ICME), Turin, Italy, 2015.

[9] Working notes of the 2015 MediaEval Retrieving Diverse Social Images task, Vol. 1436, ISSN: 1613-0073.

[10] B. Ionescu, A.L. Gînscă, B. Boteanu, M. Lupu, A. Popescu, H. Müller, "Div150Multi: A Social Image Retrieval Result Diversification Dataset with Multi-topic Queries", ACM MMSys, Klagenfurt, Austria, 2016.

Helpful tools and resources

Eyeaware lifelogging framework;

OpenCV – Open Source Computer Vision;

LIRE: Lucene Image Retrieval;

trec_eval scoring software;

ImageCLEF - Image Retrieval in CLEF;

Weka Data Mining Software;

Nvidia DIGITS;

Caffe deep learning framework;

Creative Commons.


Citations

  • When referring to the ImageCLEFlifelog 2019 task general goals, general results, etc. please cite the following publication:
    • Duc-Tien Dang-Nguyen, Luca Piras, Michael Riegler, Minh-Triet Tran, Liting Zhou, Mathias Lux, Tu-Khiem Le, Van-Tu Ninh and Cathal Gurrin. 2019. Overview of ImageCLEFlifelog 2019: Solve my life puzzle and Lifelog Moment Retrieval. In CLEF2019 Working Notes (CEUR Workshop Proceedings). <>, Lugano, Switzerland.
    • BibTex:

        author = {Duc-Tien Dang-Nguyen and Luca Piras and Michael Riegler and Minh-Triet Tran and Liting Zhou and Mathias Lux and Tu-Khiem Le and Van-Tu Ninh and Cathal Gurrin},
        title = {{Overview of ImageCLEFlifelog 2019: Solve my life puzzle and Lifelog Moment Retrieval}},
        booktitle = {CLEF2019 Working Notes},
        series = {{CEUR} Workshop Proceedings},
        year = {2019},
        volume = {},
        publisher = {<>},
        pages = {},
        month = {September 09-12},
        address = {Lugano, Switzerland},
  • When referring to the ImageCLEF 2019 lab general goals, general results, etc. please cite the following publication which will be published by September 2019 (also referred to as ImageCLEF general overview):
    • Bogdan Ionescu, Henning Müller, Renaud Péteri, Yashin Dicente Cid, Vitali Liauchuk, Vassili Kovalev, Dzmitri Klimuk, Aleh Tarasau, Asma Ben Abacha, Sadid A. Hasan, Vivek Datla, Joey Liu, Dina Demner-Fushman, Duc-Tien Dang-Nguyen, Luca Piras, Michael Riegler, Minh-Triet Tran, Mathias Lux, Cathal Gurrin, Obioma Pelka, Christoph M. Friedrich, Alba García Seco de Herrera, Narciso Garcia, Ergina Kavallieratou, Carlos Roberto del Blanco, Carlos Cuevas Rodríguez, Nikos Vasillopoulos, Konstantinos Karampidis, Jon Chamberlain, Adrian Clark, Antonio Campello, ImageCLEF 2019: Multimedia Retrieval in Medicine, Lifelogging, Security and Nature In: Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the 10th International Conference of the CLEF Association (CLEF 2019), Lugano, Switzerland, LNCS Lecture Notes in Computer Science, Springer (September 9-12 2019)
    • BibTex:
        author = {Bogdan Ionescu and Henning M\"uller and Renaud P\'{e}teri and Yashin Dicente Cid and Vitali Liauchuk and Vassili Kovalev and Dzmitri Klimuk and Aleh Tarasau and Asma Ben Abacha and Sadid A. Hasan and Vivek Datla and Joey Liu and Dina Demner-Fushman and Duc-Tien Dang-Nguyen and Luca Piras and Michael Riegler and Minh-Triet Tran and Mathias Lux and Cathal Gurrin and Obioma Pelka and Christoph M. Friedrich and Alba Garc\'ia Seco de Herrera and Narciso Garcia and Ergina Kavallieratou and Carlos Roberto del Blanco and Carlos Cuevas Rodr\'{i}guez and Nikos Vasillopoulos and Konstantinos Karampidis and Jon Chamberlain and Adrian Clark and Antonio Campello},
        title = {{ImageCLEF 2019}: Multimedia Retrieval in Medicine, Lifelogging, Security and Nature},
        booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction},
        series = {Proceedings of the 10th International Conference of the CLEF Association (CLEF 2019)},
        year = {2019},
        volume = {},
        publisher = {{LNCS} Lecture Notes in Computer Science, Springer},
        pages = {},
        month = {September 9-12},
        address = {Lugano, Switzerland}


Organizers

  • Duc-Tien Dang-Nguyen <ductien.dangnguyen(at)>, University of Bergen, Norway
  • Luca Piras <luca.piras(at)>, Pluribus One & University of Cagliari, Cagliari, Italy
  • Michael Riegler <michael(at)>, University of Oslo, Norway
  • Minh-Triet Tran <tmtriet(at)>, University of Science, Ho Chi Minh City, Vietnam
  • Mathias Lux <mlux(at)>, Klagenfurt University, Austria
  • Cathal Gurrin <cgurrin(at)>, Dublin City University, Ireland