


Welcome to the 2nd edition of the Lifelog Task!


An increasingly wide range of personal devices, such as smartphones, video cameras, and wearables, makes it possible to capture pictures, videos, and audio clips at every moment of our lives. Given the huge volume of data created, commonly referred to as lifelogs, there is a need for systems that can automatically analyse the data in order to categorize, summarize, and query it to retrieve the information the user may need.

Despite an increasing number of successful related workshops and panels (JCDL 2015, iConf 2016, ACM MM 2016, ACM MM 2017), lifelogging has seldom been the subject of a rigorous comparative benchmarking exercise, notable exceptions being the lifelog evaluation task at NTCIR-13 and last year's edition of the ImageCLEFlifelog task. In this edition of the task we aim to bring lifelogging to the attention of as wide an audience as possible and to promote research into some of its key challenges for the coming years.


News

  • 20.11.2017: Development dataset released.
  • 20.03.2018: Test data released.
  • 27.04.2018: Registration closed.
  • 06.05.2018: Deadline for submission of runs extended to 13.05.2018, 11:59:59 PM GMT.


Schedule

  • 08.11.2017: Registration opens
  • 20.11.2017: Development data release (after having registered, please send an email to us with your team name to get access to the data; please see more information here).
  • 20.03.2018: Test data release
  • 27.04.2018: Registration closes
  • 13.05.2018 (firm, extended from 06.05.2018): Deadline for submission of runs by the participants, 11:59:59 PM GMT.
  • 21.05.2018 (was 18.05.2018): Release of processed results by the task organizers.
  • 31.05.2018: Deadline for submission of working notes papers by the participants
  • 15.06.2018: Notification of acceptance of the working notes papers.
  • 29.06.2018: Camera ready working notes papers.
  • 10.-14.09.2018: CLEF 2018, Avignon, France

Subtasks Overview


The task will be split into two related subtasks using a completely new multimodal dataset, which consists of 90 days of data from two lifeloggers:

  • images (1,500-2,500 per day from wearable cameras);
  • visual concepts (automatically extracted, with varying rates of accuracy);
  • semantic content (semantic locations, semantic activities) based on sensor readings on mobile devices (via the Moves app);
  • biometrics information (heart rate, galvanic skin response, calorie burn, steps, etc.);
  • music listening history;
  • computer usage (frequency of typed words via the keyboard and information consumed on the computer via ASR of on-screen activity on a per-minute basis).

The dataset is based on the data available for the NTCIR-13 Lifelog 2 task. A lifelog baseline search engine API will be provided by the task organizers.

SubTask 1: Activities of Daily Living understanding (ADLT)

Given a period of time, e.g., "From 13 August to 16 August" or "Every Saturday", the participants should analyse the lifelog data and provide a summarisation based on the selected concepts (provided by the task organizers) of Activities of Daily Living (ADL) and the environmental settings / contexts in which these activities take place.

Some examples of ADL concepts: "Commuting (to work or another common venue)", "Travelling (to a destination other than work, home or another common social event)", "Preparing meals (including making tea or coffee)", "Eating/drinking"; and of contexts: "In an office environment", "In a home", "In an open space". The summarisation should report the frequency of and time spent on each ADL concept, and the total time for each context concept. For example:

  • ADL: “Eating/drinking: 6 times, 90 minutes”, “Travelling: 1 time, 60 minutes”;
  • Context: “In an office environment: 500 minutes”, “In a church: 30 minutes”.
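The required frequency/duration summary can be illustrated with a short sketch. The event representation below (a concept name paired with a duration in minutes, one tuple per detected occurrence) is hypothetical and not an official data format:

```python
from collections import defaultdict

def summarise_adl(events):
    """Aggregate (concept, minutes) events into (frequency, total minutes).

    `events` is a hypothetical list of (concept, minutes) tuples, one per
    detected activity occurrence within the queried period.
    """
    freq = defaultdict(int)
    total = defaultdict(int)
    for concept, minutes in events:
        freq[concept] += 1
        total[concept] += minutes
    return {c: (freq[c], total[c]) for c in freq}

# Two eating events and one trip detected in the queried period:
events = [("Eating/drinking", 30), ("Eating/drinking", 60), ("Travelling", 60)]
print(summarise_adl(events))
# {'Eating/drinking': (2, 90), 'Travelling': (1, 60)}
```

Context concepts would be aggregated the same way, reporting only the total time.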

SubTask 2: Lifelog moment retrieval (LMRT)

The participants have to retrieve a number of specific moments in a lifelogger's life. We define moments as semantic events, or activities that happened throughout the day. For example, they should return the relevant moments for the query “Find the moment(s) when I was shopping for wine in the supermarket.” Particular attention should be paid to the diversification of the selected moments with respect to the target scenario.

The ground truth for this subtask was created using manual annotation.

Registering for the task and accessing the data

Please refer to the general ImageCLEF registration instructions.

Submission instructions

The submissions will be received through the ImageCLEF 2018 system. Go to "Runs", then "Submit run", and then select the track.

Participants will be permitted to submit up to 10 runs.

Each system run will consist of a single ASCII plain text file.

The results of each run should be given in separate lines in the text file.

Submission files

The file name must follow the pattern <task abbreviation>_<team name without spaces>_<run name without spaces>.csv


- ADLT_DCU_run1.csv

- LMRT_DCU_run1.csv
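As a sanity check before uploading, the naming rule can be verified with a short script. The accepted task abbreviations (ADLT, LMRT) are an assumption derived from the subtask names above, not an official specification:

```python
import re

# Hypothetical validator for <task>_<team>_<run>.csv file names.
# ADLT/LMRT are assumed from the subtask abbreviations used on this page.
PATTERN = re.compile(r"^(ADLT|LMRT)_[^_\s]+_[^_\s]+\.csv$")

def is_valid_filename(name: str) -> bool:
    """Return True if `name` matches the submission naming pattern."""
    return PATTERN.match(name) is not None

print(is_valid_filename("ADLT_DCU_run1.csv"))   # True
print(is_valid_filename("DCU run1.csv"))        # False (spaces, no task prefix)
```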

Evaluation Methodology

For each subtask, the final score is computed as the arithmetic mean of the per-query scores. For each query, the evaluation method is applied as follows:

SubTask 1: Activities of Daily Living understanding

Evaluation metrics based on Normalized Discounted Cumulative Gain (NDCG) at different depths are used, i.e., NDCG@N, where N varies with the type of topic: for recall-oriented topics N will be larger (>20), while for precision-oriented topics N will be smaller (5, 10, or 20).
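A minimal sketch of how NDCG@N is computed, using the standard logarithmic discount; the exact gain and discount variant applied by the organizers may differ:

```python
import math

def dcg_at_n(gains, n):
    """Discounted cumulative gain over the top-n relevance grades."""
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains[:n]))

def ndcg_at_n(ranked_gains, all_gains, n):
    """NDCG@N: DCG of the submitted ranking normalised by the ideal DCG.

    `ranked_gains` are relevance grades in submitted order; `all_gains`
    are all ground-truth grades, sorted here to form the ideal ranking.
    """
    ideal = dcg_at_n(sorted(all_gains, reverse=True), n)
    return dcg_at_n(ranked_gains, n) / ideal if ideal > 0 else 0.0

# A perfect top-3 ranking scores 1.0.
print(ndcg_at_n([3, 2, 1], [3, 2, 1], 3))  # 1.0
```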

SubTask 2: Lifelog moment retrieval

For assessing performance, classic metrics will be deployed. These metrics are:

  • Cluster Recall at X (CR@X) - a metric that assesses how many different clusters from the ground truth are represented among the top X results;
  • Precision at X (P@X) - measures the number of relevant photos among the top X results;
  • F1-measure at X (F1@X) - the harmonic mean of the previous two.

Various cut-off points are considered, e.g., X = 5, 10, 20, 30, 40, 50. The official ranking metric this year is the F1-measure@10, which gives equal importance to diversity (via CR@10) and relevance (via P@10).
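A sketch of how the three metrics relate, for a single topic. The input names are illustrative; official scores are computed by the organizers' own tooling:

```python
def metrics_at_x(ranked, relevant, clusters, x):
    """Compute P@X, CR@X and F1@X for one topic.

    `ranked`   - photo ids in submitted order (illustrative)
    `relevant` - set of relevant photo ids from the ground truth
    `clusters` - dict mapping each relevant photo id to its cluster id
    """
    top = ranked[:x]
    rel_in_top = [p for p in top if p in relevant]
    p_at_x = len(rel_in_top) / x
    found = {clusters[p] for p in rel_in_top}
    all_clusters = set(clusters.values())
    cr_at_x = len(found) / len(all_clusters) if all_clusters else 0.0
    denom = p_at_x + cr_at_x
    f1_at_x = 2 * p_at_x * cr_at_x / denom if denom else 0.0
    return p_at_x, cr_at_x, f1_at_x

# 2 of the top-4 results are relevant, covering 1 of 2 ground-truth clusters:
p, cr, f1 = metrics_at_x(["a", "b", "c", "d", "e"],
                         {"a", "c", "e"},
                         {"a": 1, "c": 1, "e": 2}, x=4)
print(p, cr, f1)  # 0.5 0.5 0.5
```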

Participants are allowed to undertake the subtasks in an interactive or automatic manner. For interactive submissions, a maximum of five minutes of search time is allowed per topic. The organizers particularly encourage methods that allow interaction with real users, for example via Relevance Feedback (RF): besides raw performance, the mode of interaction (such as the number of RF iterations) and the novelty of the method (for example, a new way to interact with real users) will be taken into account.


Results

Results for all subtasks were sent to the teams; we report here only each team's best-scoring run.

Subtask #1: Activities of Daily Living Understanding

Group Name   Percentage Dissimilarity   Rank
DCU*         0.816                      -
CIE@UTB      0.556                      1
NLP-lab      0.479                      2
HCMUS        0.059                      3

Subtask #2: Lifelog Moment Retrieval

Group Name   F1@10   Rank
AILabGTi     0.545   1
HCMUS        0.479   2
Regim_Lab    0.424   3
NLP-lab      0.395   4
CAMPUS-UPB   0.216   5
DCU*         0.131   -

* NOTE: These runs are not ranked since they are from the organizers team.

Submitting a working notes paper to CLEF

Upon completion of the task, participating teams are expected to describe their systems in a working notes paper, regardless of their results. Keep in mind that the main goal of the lab is not to win the benchmark but to compare techniques on the same data, so everyone can learn from the results. Authors are invited to submit using the LNCS proceedings format.

The CLEF 2018 working notes will be published in the proceedings, facilitating indexing by DBLP. In accordance with the CEUR-WS policies, a light review of the working notes will be conducted by the task organizers to ensure quality.

Working notes must be submitted before 31 May 2018, 11:59 pm (midnight) Central European Summer Time, through the EasyChair submission system. The working notes papers are technical reports, written in English, describing the participating systems and the conducted experiments. To avoid redundancy, the papers should *not* include a detailed description of the actual task, data set and experimentation protocol. Instead, the papers are required to cite both the general ImageCLEF overview paper and the corresponding lifelog task overview paper, and to present the official results returned by the organizers. Bibtex references will be available soon. A general structure for the paper should provide at a minimum the following information:

  1. Title
  2. Authors
  3. Affiliations
  4. Email addresses of all authors
  5. The body of the text. This should contain information on:
    • tasks performed
    • main objectives of experiments
    • approach(es) used and progress beyond state-of-the-art
    • resources employed
    • results obtained
    • analysis of the results
    • perspectives for future work

The paper should not exceed 12 pages. Further instructions on how to write and submit your working notes will be available soon on this page.

Recommended Reading

[1] Duc-Tien Dang-Nguyen, Luca Piras, Michael Riegler, Giulia Boato, Liting Zhou, Cathal Gurrin, "Overview of ImageCLEFlifelog 2017: Lifelog Retrieval and Summarization", CLEF2017 Working Notes, Dublin, Ireland, 2017, vol 1866.

[2] Cathal Gurrin, Xavier Giro-i-Nieto, Petia Radeva, Mariella Dimiccoli, Håvard Johansen, Hideo Joho, Vivek K Singh, "LTA 2016: The First Workshop on Lifelogging Tools and Applications", ACM Multimedia, Amsterdam, The Netherlands, 2016.

[3] Cathal Gurrin, Xavier Giro-i-Nieto, Petia Radeva, Mariella Dimiccoli, Duc Tien Dang Nguyen, Hideo Joho, "LTA 2017: The Second Workshop on Lifelogging Tools and Applications", ACM Multimedia, Mountain View, CA USA, 2017.

[4] Cathal Gurrin, Hideo Joho, Frank Hopfgartner, Liting Zhou, Rami Albatal, "Overview of NTCIR-12 Lifelog Task", Proceedings of the 12th NTCIR Conference on Evaluation of Information Access Technologies, Tokyo, Japan, 2016.

[5] Duc-Tien Dang-Nguyen, Luca Piras, Giorgio Giacinto, Giulia Boato, Francesco GB De Natale, "Multimodal Retrieval with Diversification and Relevance Feedback for Tourist Attraction Images", ACM Transactions on Multimedia Computing, Communications, and Applications, vol 13, n° 4, 2017.

[6] Duc-Tien Dang-Nguyen, Luca Piras, Giorgio Giacinto, Giulia Boato, Francesco GB De Natale, "A hybrid approach for retrieving diverse social images of landmarks", IEEE International Conference on Multimedia and Expo (ICME), Turin, Italy, 2015.

[7] Working notes of the 2015 MediaEval Retrieving Diverse Social Images task, Vol. 1436, ISSN: 1613-0073.

[8] B. Ionescu, A.L. Gînscă, B. Boteanu, M. Lupu, A. Popescu, H. Müller, "Div150Multi: A Social Image Retrieval Result Diversification Dataset with Multi-topic Queries", ACM MMSys, Klagenfurt, Austria, 2016.

Helpful tools and resources

Eyeaware lifelogging framework;

OpenCV – Open Source Computer Vision;

LIRE: Lucene Image Retrieval;

trec_eval scoring software;

ImageCLEF - Image Retrieval in CLEF;

Weka Data Mining Software;

Nvidia DIGITS;

Caffe deep learning framework;

Creative Commons.


Citations

  • When referring to the ImageCLEFlifelog 2018 task general goals, general results, etc. please cite the following publication:
    • Duc-Tien Dang-Nguyen, Luca Piras, Michael Riegler, Liting Zhou, Mathias Lux, and Cathal Gurrin. 2018. Overview of ImageCLEFlifelog 2018: Daily Living Understanding and Lifelog Moment Retrieval. In CLEF2018 Working Notes (CEUR Workshop Proceedings). <>, Avignon, France.
    • BibTex:

        @inproceedings{ImageCLEFlifelog2018,
        author = {Duc-Tien Dang-Nguyen and Luca Piras and Michael Riegler and Liting Zhou and Mathias Lux and Cathal Gurrin},
        title = {{Overview of ImageCLEFlifelog 2018: Daily Living Understanding and Lifelog Moment Retrieval}},
        booktitle = {CLEF2018 Working Notes},
        series = {{CEUR} Workshop Proceedings},
        year = {2018},
        volume = {},
        publisher = {<>},
        pages = {},
        month = {September 10-14},
        address = {Avignon, France},
        }
    • When referring to the ImageCLEF 2018 Lab please cite the following publication:
    • @inproceedings{ImageCLEF18,

        author = {Bogdan Ionescu and Henning M\"uller and Mauricio Villegas and Alba Garc\'ia Seco de Herrera and Carsten Eickhoff and Vincent Andrearczyk and Yashin Dicente Cid and Vitali Liauchuk and Vassili Kovalev and Sadid A. Hasan and Yuan Ling and Oladimeji Farri and Joey Liu and Matthew Lungren and Duc-Tien Dang-Nguyen and Luca Piras and Michael Riegler and Liting Zhou and Mathias Lux and Cathal Gurrin},
        title = {{Overview of ImageCLEF 2018}: Challenges, Datasets and Evaluation},
        booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction},
        series = {Proceedings of the Ninth International Conference of the CLEF Association (CLEF 2018)},
        year = {2018},
        volume = {},
        publisher = {{LNCS} Lecture Notes in Computer Science, Springer},
        pages = {},
        month = {September 10-14},
        address = {Avignon, France},
        }


Organizers

  • Duc-Tien Dang-Nguyen <duc-tien.dang-nguyen(at)>, Dublin City University, Ireland
  • Luca Piras <luca.piras(at)>, Pluribus One & University of Cagliari, Cagliari, Italy
  • Michael Riegler <michael(at)>, University of Oslo, Norway
  • Liting Zhou <zhou.liting2(at)>, Dublin City University, Ireland
  • Mathias Lux <mlux(at)>, Klagenfurt University, Austria
  • Cathal Gurrin <cgurrin(at)>, Dublin City University, Ireland