CLEF eHealth 2016 – Task 3

Patient-Centred Information Retrieval

The 2016 CLEF eHealth Information Retrieval Task aims to evaluate the effectiveness of information retrieval systems when searching for health content on the web, with the objective of fostering research and development of search engines tailored to health information seeking.

This task is a continuation of the previous CLEF eHealth information retrieval (IR) tasks that ran in 2013, 2014 and 2015, and embraces the TREC-style evaluation process, with a shared collection of documents and queries, the contribution of runs from participants, and the subsequent formation of relevance assessments and evaluation of the participants' submissions.

This year’s IR task continues the growth path set by the 2014 and 2015 CLEF eHealth information retrieval challenges. The 2016 task uses a new web corpus (ClueWeb12 B13), which is more representative of the current state of health information online. This year we extract topic stories and generate the associated (English) queries by mining health web forums to identify example information needs to be used in the task.

The task is structured into three subtasks:

IRTask 1: ad-hoc search

Queries for this task are generated by mining health web forums to identify example information needs. This task extends the evaluation framework used in 2015 (which considered the readability of the retrieved documents alongside topical relevance) to further dimensions of relevance, such as the reliability of the retrieved information.

IRTask 2: query variation

This task explores query variations for an information need. Different query variants are generated from the same forum entry, capturing the variability intrinsic in how people search when they have the same information need. Participants should take these variations into account when building their systems: participants will be told which queries relate to the same information need, and they have to produce one set of results to be used as the answer for all query variations of that information need. We aim to foster research into building systems that are robust to query variations (a simple fusion-based baseline is sketched after the subtask list).

IRTask 3: multilingual search

As in last year's task, this subtask offers parallel queries in several languages (Czech, French, Hungarian, German, Polish and Swedish).
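For participants new to handling query variations, one simple baseline for IRTask 2 (not prescribed by the task) is to retrieve documents for each variant separately and then fuse the per-variant rankings, for example with reciprocal rank fusion. The Python sketch below is only an illustration; the constant k=60 and all names are our own choices.

    import collections

    def reciprocal_rank_fusion(rankings, k=60):
        """Fuse several ranked lists of document ids into a single ranking.

        rankings: one ranked list of doc ids per query variant,
                  ordered from most to least relevant.
        k: smoothing constant from the usual RRF formulation.
        """
        scores = collections.defaultdict(float)
        for ranking in rankings:
            for rank, doc_id in enumerate(ranking, start=1):
                scores[doc_id] += 1.0 / (k + rank)
        return sorted(scores, key=scores.get, reverse=True)

    # Placeholder document ids; in practice these would be ClueWeb12 ids.
    variant_runs = [["docA", "docB"], ["docB", "docC"], ["docA", "docC"]]
    fused = reciprocal_rank_fusion(variant_runs)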

Registration


Please register on the main CLEF 2016 registration page.

Dataset

ClueWeb12 B13 will be used this year. If your organization does not have access to this dataset, you can still participate in our task by accessing the infrastructure kindly provided by Microsoft Azure. Nevertheless, every participant must have a licence to use ClueWeb; the licence is free of charge. It can be obtained by filling in and signing every page of the ClueWeb organizational agreement and sending it to lupu@ifs.tuwien.ac.at. Your Azure account will be created only after we receive confirmation that the licence has been obtained.

To summarize, the following alternative avenues are available to work on the ClueWeb12 dataset:
(A) You already have ClueWeb12: you do not need to acquire a new copy or complete other licensing forms. Please ensure you are using the ClueWeb 12 B13 version.
(B) You do not have ClueWeb12 and want to obtain an offline copy for your organisation: purchase the dataset from the ClueWeb website.
(C) You do not have ClueWeb12 and you want online access to the dataset: we are providing access to an Azure instance where participants can access ClueWeb12 B13. This access is available only until the task is completed (i.e., until the CLEF 2016 conference). To access this resource, you need to complete a ClueWeb12 licence (free of charge) and email it to Mihai Lupu. Note that it can take up to three weeks to approve your licence, so please apply for access to the collection as early as possible.

Note: The Azure instance we are making available to participants includes (1) the dataset, (2) standard indexes built with the Terrier and Indri tools, and (3) additional resources such as a spam list, anchor texts, URLs, etc. made available through the ClueWeb12 website.

Queries

This year’s queries explore real health consumer posts from health web forums. We extracted posts from the ‘askDocs’ forum of Reddit and presented them to query generators, who had to create queries based on what they read in the initial user post. It is expected that different query creators will generate different queries for the same post.
For IRTask 1, participants should treat each query individually, submitting the returned documents for each query. For IRTask 2, participants should submit results for each group of queries of a post, i.e., take advantage of all query variations for the same post. Below we show some example queries:
<queries>
  <query>
    <id>900001</id>
    <title>medicine for nasal drip</title>
  </query>
  <query>
    <id>900002</id>
    <title>bottle neck and nasal drip medicine</title>
  </query>
  ...
  <query>
    <id>904001</id>
    <title>omeprazole side effect</title>
  </query>
  ...
</queries>
The first 3 digits of a query id identify the post number, while the last 3 digits identify each individual query. In the example above, we show queries 1 and 2 created for post 900, and query 1 created for post 904.
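As an illustration of how this id scheme can be used, the Python sketch below parses a query file in the format above and groups query variants by post; the file name is a placeholder, and we assume the released file is well-formed XML.

    import collections
    import xml.etree.ElementTree as ET

    def load_queries(path):
        """Return a mapping {post_id: [(query_id, title), ...]}."""
        by_post = collections.defaultdict(list)
        for query in ET.parse(path).getroot().findall("query"):
            qid = query.findtext("id").strip()
            title = query.findtext("title").strip()
            post_id = qid[:3]   # first 3 digits identify the post
            by_post[post_id].append((qid, title))
        return by_post

    # For IRTask 1 run each query on its own; for IRTask 2 each list in
    # load_queries("queries2016.xml") holds all variants of one information need.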
<<Here>> you can find a large number of example queries for this year’s task.
The English test queries are available <<HERE>>. They were created by 6 query creators with different levels of medical expertise. Note that, where appropriate, we fixed misspellings detected with the Linux program ‘aspell’. However, we did not remove punctuation marks from the queries; you might want to process the queries to remove punctuation marks.
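For instance, a minimal way to strip ASCII punctuation (one possible preprocessing choice, not a requirement of the task) is:

    import string

    def strip_punctuation(query):
        """Remove ASCII punctuation marks and collapse whitespace."""
        cleaned = query.translate(str.maketrans("", "", string.punctuation))
        return " ".join(cleaned.split())

    strip_punctuation("omeprazole: side effect?")  # -> 'omeprazole side effect'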
For IRTask 3, we provide query translations to Czech, French, Hungarian, German, Polish and Swedish.

Evaluation

System evaluation will consider P@5, P@10, NDCG@5 and NDCG@10 (the main measure), which can be computed with the trec_eval evaluation tool, available at http://trec.nist.gov/trec_eval/.
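For example, assuming a qrels file and a run file in the standard TREC format, these measures can be obtained by calling trec_eval as follows (shown here via Python; the file names are placeholders):

    import subprocess

    # qrels and run file names below are placeholders for your own files.
    subprocess.run([
        "trec_eval",
        "-m", "P.5,10",         # P@5 and P@10
        "-m", "ndcg_cut.5,10",  # NDCG@5 and NDCG@10 (the main measure is NDCG@10)
        "qrels.txt",
        "myrun.txt",
    ], check=True)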

As in CLEF eHealth 2015, the 2016 challenge also uses evaluation measures that combine assessments of topical relevance with the readability of the medical content, as suggested by Zuccon & Koopman. Information about this measure and the associated toolkit is included in the evaluation package (which will be shared with registered participants).
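As a rough sketch of the idea behind such a combined measure, understandability-biased rank-biased precision (uRBP) discounts each rank geometrically and multiplies the relevance gain by an understandability gain. The function below is only an illustration under that reading; participants should use the toolkit shipped with the evaluation package for the official measure.

    def urbp(relevance, understandability, rho=0.8):
        """Understandability-biased RBP over a ranked list.

        relevance, understandability: per-rank gains in [0, 1], rank 1 first.
        rho: RBP persistence parameter (0.8 is a common choice, not prescribed here).
        """
        return (1 - rho) * sum(
            (rho ** i) * r * u
            for i, (r, u) in enumerate(zip(relevance, understandability))
        )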
We are also working on other new evaluation measures <details to follow soon>

Timeline

  • Collection release: begin March 2016
  • Example queries release: mid March 2016
  • English Test queries release: May 4th 2016
  • Multilingual Test queries release: May 9th 2016
  • System Submission: May 20th 2016
  • Working Notes Submission: May 25th 2016

Contact Information

The best (and maybe the fastest) way to get your questions answered is to join one of the clef-ehealth mailing lists:

Guidelines and Submission Details

This document details the submission procedure and how your results will be judged: <<LINK>>
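While the guidelines document above specifies the exact submission requirements, runs scored with trec_eval typically follow the standard six-column TREC run format (query id, the literal 'Q0', document id, rank, score, run tag); a hypothetical run line might look like:

    900001 Q0 <clueweb12-document-id> 1 15.72 TeamName_run1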

Runs should be submitted using the EasyChair system at: https://easychair.org/conferences/?conf=clefehealth2016runs

Working notes papers should be submitted using the EasyChair system at: https://www.easychair.org/conferences/?conf=clef2016

Further Reading

Task overview paper for 2015

Task overview paper for 2014

Task overview paper for 2013

Participants Working Notes 2015

Participants Working Notes 2014

Participants Working Notes 2013