Task 1 – Information Visualisation
Task 1: Visual-Interactive Search and Exploration of eHealth Data
The overall goal of this task is to help patients (or their next of kin) better understand their health information.
The CLEFeHealth2013 tasks 1-3 (overview, participant working notes) developed and evaluated automated approaches for discharge summaries:
1. terminology standardisation for medical disorders (e.g., heartburn as opposed to gastroesophageal reflux disease),
2. shorthand expansion (e.g., GERD expanded to gastroesophageal reflux disease), and
3. text linkage with further information available on the Internet (e.g., care guidelines for heartburn).
With the 2014 Task 1, we challenge participants to design interactive visualisations that help patients better understand their discharge summaries and explore additional relevant documents in light of a large document corpus and their various facets in context.
As a scenario, assume an English-speaking, recently discharged patient (or her next of kin) at home in the USA who wants to learn about her clinical treatment history and its implications for future behavior, possible symptoms or developments, and situational awareness related to her own health and healthcare in general. That is, the targeted users are layperson patients (as opposed to clinical experts).
We ask participants to design an interactive visual representation of the discharge summary and potentially relevant documents available on the Internet. The goal of this tool will be to provide an effective, usable, and trustworthy environment for navigating, exploring, and interpreting both the discharge summary and the Internet documents, as needed to promote understanding and informed decision-making.
We assume a standard application environment is given, including a networked desktop system and a mobile device (e.g., smartphone or tablet). The challenge is structured into two different but connected Tasks (1a and 1b), which participants can choose to work on separately or address together in an integrated Task (grand challenge).
The input data provided to participants consists of six carefully chosen cases from our 2013 challenge. Using the first case is mandatory for all participants; the other five cases are optional. Each case includes a discharge summary, with the disorder spans marked and mapped to SNOMED-CT (Systematized Nomenclature of Medicine Clinical Terms) Concept Unique Identifiers, and the shorthand spans marked and mapped to the UMLS (Unified Medical Language System). Each discharge summary is also associated with a profile describing the patient (e.g., “A forty year old woman, who seeks information about her condition”), a narrative describing her information need (e.g., “description of what type of disease hypothyreoidism is”), a query to address this information need by searching the Internet documents, and the list of the documents judged relevant to the query. Each query consists of a description (e.g., “What is hypothyreoidism”) and a title (e.g., “Hypothyreoidism”).
Participants should provide a prototype that demonstrates the effectiveness of the proposed solution. Although functioning prototypes are preferred, paper prototypes, mock screenshots, and other low-fidelity prototypes are also acceptable.
The subpages of this page define the problems and outline requirements and evaluation criteria for this task in more detail.
Task 1 Dataset
The input data consists of the six cases from the CLEFeHealth2013 tasks described above. Each case comprises a de-identified discharge summary (with disorder spans mapped to SNOMED-CT and shorthand spans mapped to the UMLS), a patient profile, a narrative describing the information need, a query (title and description), and the list of documents judged relevant to the query. Using the first case is mandatory for all participants; the other five cases are optional.
Cases
Case 1 (mandatory)
- 1. Patient profile: This 55-year-old woman with chronic pancreatitis is worried that her condition is getting worse. She wants to know more about jaundice and her condition
- 2. De-identified discharge summary
- 3. Information need: chronic alcoholic induced pancreatitis and jaundice in connection with it
- 4. Query: is jaundice an indication that the pancreatitis has advanced
- a. Title: chronic alcoholic induced pancreatitis and jaundice
Case 2 (optional)
- 1. Patient profile: A forty year old woman, who seeks information about her condition
- 2. De-identified discharge summary
- 3. Information need: description of what type of disease hypothyreoidism is
- 4. Query: What is hypothyreoidism
- a. Title: Hypothyreoidism
Case 3 (optional)
- 1. Patient profile: This 50-year-old female wants to know what MI, which her father has, is, and whether this condition is hereditary. She does not want additional trouble on top of her current illness
- 2. De-identified discharge summary
- 3. Information need: what MI is and whether it is hereditary
- 4. Query: MI
- a. Title: MI and hereditary
Case 4 (optional)
- 1. Patient profile: This 87-year-old female has had several episodes of abdominal pain with no clear cause. The family now wants to seek information about her bruises and raccoon eyes: could these be symptoms of some blood disease
- 2. De-identified discharge summary
- 3. Information need: can bruises and raccoon eyes be symptoms of blood disease
- 4. Query: bruises and raccoon eyes and blood disease
- a. Title: bruises and raccoon eyes and blood disease
Case 5 (optional)
- 1. Patient profile: A 60-year-old male who knows that helicobacter pylori can cause cancer and now wants to know whether his current abdominal pain could be a symptom of cancer
- 2. De-identified discharge summary
- 3. Information need: is abdominal pain due to helicobacter pylori a symptom of cancer
- 4. Query: cancer, helicobacter pylori and abdominal pain
- a. Title: abdominal pain and helicobacter pylori and cancer
Case 6 (optional)
- 1. Patient profile: A 43-year-old male with Down syndrome lives in an extended care facility. The personnel want to know whether they can avoid frothy sputum in connection with the patient’s chronic aspiration and status post laryngectomy
- 2. De-identified discharge summary
- 3. Information need: how to avoid frothy sputum
- 4. Query: frothy sputum and how to avoid and care for this condition
- a. Title: frothy sputum and care
Discharge Summaries
After the participants have completed the registration and data agreement steps, they will receive the set of 6 de-identified discharge summaries, including the disorder spans marked and mapped to SNOMED-CT (Systematized Nomenclature of Medicine Clinical Terms, Concept Unique Identifiers), and the shorthand spans marked and mapped to the UMLS (Unified Medical Language System).
Query Set
The CLEF eHealth 2013 Task 3 data set consisted of 50 real patient queries generated from discharge summaries, a collection of roughly one million health-related documents (web pages) against which the queries can be run, and a list of the documents judged relevant to each query (the result set). In 2014, we use the aforementioned six query cases.
The queries have been manually generated by healthcare professionals from a manually extracted set of highlighted disorders from the discharge summaries. A mapping between each query and the associated matching discharge summary (from which the disorder was taken) is provided.
Queries are distributed in an extended TREC-style format, where the title, description, and narrative fields are as in the classic format and the additional fields are as follows:
1. discharge_summary: matching discharge summary, and
2. profile: details about the patient extracted, or inferred, from the discharge summary (which is required for determining the information which is being sought by the patient).
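To make the layout concrete, a topic in this extended TREC style might look like the sketch below. The title, description, narrative, and profile values are drawn from the mandatory case; the query number and the discharge_summary value are illustrative placeholders, and the exact tag syntax in the distributed files may differ.

```xml
<top>
  <num>qtest1</num>
  <title>chronic alcoholic induced pancreatitis and jaundice</title>
  <desc>is jaundice an indication that the pancreatitis has advanced</desc>
  <narr>chronic alcoholic induced pancreatitis and jaundice in connection with it</narr>
  <!-- identifier of the matching de-identified discharge summary -->
  <discharge_summary>...</discharge_summary>
  <profile>This 55-year-old woman with chronic pancreatitis is worried that her
  condition is getting worse. She wants to know more about jaundice and her
  condition.</profile>
</top>
```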
Document Set
- 1. The web pages that were judged for relevance for each of the 6 queries are provided in a set of .dat files.
- 2. Each .dat file contains a collection of web pages and metadata, where the data for one web page is organised as follows:
- a. a unique identifier (#UID) for a web page in this document collection,
- b. the date of crawl in the form YYYYMM (#DATE),
- c. the URL (#URL) to the original web page, and
- d. the raw HTML content (#CONTENT) of the web page.
A short example illustrates the structure of a .dat file:
#UID:acidr1783_12_000001
#DATE:201204-06
#URL:http://www.acidreflux-heartburn-gerd.net
#CONTENT:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
.
<body>
.
<h2 class="graytext">Children's Reflux and Infant Reflux</h2>
<p class="tighterleading"><a href="/acidreflux/children.html"><strong>Children and Acid Reflux</strong></a><br /> Children experiencing reflux can exhibit typical symptoms, such as heartburn and regurgitation, or atypical symptoms...</p>
.
.
</body>
</html>
#EOR
#UID:acidr1783_12_000002
#DATE:201204-06
#URL:http://www.acidreflux-heartburn-gerd.net/News/beatheartburn.html
#CONTENT:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
.
.
<li><a href="/acidreflux/nighttimeacidreflux.html">nighttime acid reflux<br /> </a></li>
</ul>
<h3><a href="../heartburn/index.html">Heartburn</a></h3>
<ul class="menulist">
<li><a href="../heartburn/acidheartburn.html">acid heartburn</a></li>
<li><a href="../heartburn/heartburn_remedies.html">heartburn remedies</a></li>
.
.
</html>
#EOR
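The record layout above can be parsed with a few lines of code. The following is a minimal sketch in Python (a language choice of ours, not mandated by the task), assuming the field markers #UID, #DATE, #URL, #CONTENT and the #EOR record terminator shown in the example:

```python
def parse_dat(text):
    """Split the contents of a .dat file into a list of web-page records."""
    records = []
    for raw in text.split("#EOR"):
        raw = raw.strip()
        if not raw:
            continue
        record = {}
        # Header fields precede #CONTENT:; everything after it is raw HTML.
        header, _, content = raw.partition("#CONTENT:")
        record["content"] = content.strip()
        for line in header.splitlines():
            if line.startswith("#UID:"):
                record["uid"] = line[len("#UID:"):].strip()
            elif line.startswith("#DATE:"):
                record["date"] = line[len("#DATE:"):].strip()
            elif line.startswith("#URL:"):
                record["url"] = line[len("#URL:"):].strip()
        records.append(record)
    return records

sample = """#UID:acidr1783_12_000001
#DATE:201204-06
#URL:http://www.acidreflux-heartburn-gerd.net
#CONTENT:
<html><body><h2>Children's Reflux and Infant Reflux</h2></body></html>
#EOR
"""
print(parse_dat(sample)[0]["uid"])  # acidr1783_12_000001
```

Splitting on #EOR assumes the marker never occurs inside the HTML content; this holds for the sketch but should be verified against the real files.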
Result Set
Relevance assessment was performed by medical professionals. Relevance is provided on a 2-point scale: non-relevant (0) and relevant (1). The relevance assessments are provided in a file in the standard TREC qrel format. An extract from the provided file:
qtest1 0 atlas0954_12_001451 0
qtest1 0 atlas0954_12_001673 0
qtest1 0 atlas0954_12_001766 0
qtest1 0 atlas0954_12_002713 0
qtest1 0 atlas0954_12_002762 1
qtest1 0 atlas0954_12_002793 0
qtest1 0 atlas0954_12_002799 0
qtest1 0 clini0836_12_016941 0
qtest1 0 clini0836_12_016942 0
qtest1 0 clini0836_12_044473 0
Here, and of interest in this Task 1, the first column is the query ID, the third column is the document ID, and the fourth column indicates whether the document is relevant (1) or not relevant (0) to the query. (The second column is part of the standard TREC qrel format but is unused here.)
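A minimal sketch of reading such qrel lines in Python (the function and variable names are ours) and extracting the documents judged relevant to a query:

```python
def read_qrels(lines):
    """Build {query_id: {doc_id: relevance}} from TREC qrel lines."""
    qrels = {}
    for line in lines:
        parts = line.split()
        if len(parts) != 4:
            continue  # skip blank or malformed lines
        qid, _, doc_id, rel = parts  # second column is unused
        qrels.setdefault(qid, {})[doc_id] = int(rel)
    return qrels

sample = [
    "qtest1 0 atlas0954_12_001451 0",
    "qtest1 0 atlas0954_12_002762 1",
]
qrels = read_qrels(sample)
relevant = [doc for doc, rel in qrels["qtest1"].items() if rel == 1]
print(relevant)  # ['atlas0954_12_002762']
```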
Obtaining Task 1 Dataset
To participate, you must first register for CLEF2014. The registration page will open in November 2013 and we will provide a link here (as well as on the registration page). After we have received your registration, we will email you further guidelines about gaining access to Task 1 data.
Task 1 Guidelines
Submissions
Participants are given an option to submit to two evaluations:
1. By Feb 1, 2014 (optional): drafts for comments. Based on this submission, we will provide participants comments that may help them to prepare their final submission. We encourage all participants to submit this draft, but this submission is not mandatory.
2. By May 1, 2014: final submissions to be evaluated. This submission will be used to determine the final evaluation results.
All submissions need to be entered via the official EasyChair system (https://www.easychair.org/conferences/?conf=clefehealth2014) on the Internet. The system was opened in January 2014.
Final submissions need to encompass the following mandatory items:
1. A concise report of the design, the implementation (if applicable), and a discussion of the application results, in the form of an extended abstract. The abstract needs to highlight the obtained findings, possibly supported by an informal user study or other means of validation. This abstract is due by 1 May 2014, and teams can use this document as a basis for their working notes (due by 15 June 2014), if they wish. Submissions that address only Task 1a or 1b should not exceed 10 pages of text and figures illustrating the design and interaction, using the CLEF2013 abstract template. Submissions that address the grand challenge should follow the same guideline but not exceed 20 pages.
2. Two demonstration videos illustrating the relevant functionality of the functional design or paper prototype applied to the provided task data. In the first video, the user should be from the development team (i.e., a person who knows the functionality). In the second video, the user should be a novice, that is, a person with no previous experience with the functionality, and the video should also explain how the novice was trained to use it. Each video should be 5 to 7.5 minutes long for submissions that address only Task 1a or 1b, and 10 to 15 minutes long for submissions that address the grand challenge.
In addition to their actual submission, all participating teams are asked to provide us the following mandatory items:
1. a response to our online survey that we will use to characterise teams and their submissions,
2. an extended abstract, which summarises the main details and results of the experiments (to produce the CLEF 2014 Book of Abstracts), and
3. a report (working notes) describing the experiments (to be published on the Internet).
Further details about the survey, extended abstracts, and reports will be provided here in May 2014. For the Feb 1 submission, we ask teams to submit a draft of the aforementioned concise report.
Evaluation Criteria
Solutions need to address the task problems through appropriate visual-interactive design and need to demonstrate their effectiveness. The problems are deliberately defined in an open, creative way and involve visual-interactive design and, ideally, a combination of automatic, visual, and interactive techniques. Participants are encouraged to implement prototypical solutions, but pure designs without implementation are also allowed.
Submissions will be judged on their rationale for the design, including the selection of appropriate visual-interactive data representations and reference to state-of-the-art techniques in information visualisation, natural language processing, information retrieval, machine learning, and document visualisation. They need to
1. demonstrate that the posed problems are addressed, in the sense that the layperson patient is helped with her complex information need,
2. provide a compelling use-case-driven discussion of the supported workflow and exemplary results obtained, and
3. highlight the evaluation approach and the obtained findings.
Each final submission will be assessed by a panel of at least three evaluation panelists, supported by one member of our organising committee and one peer from the other participating teams. The panel members are renowned experts in patient-centric healthcare, information visualisation, software design, machine learning, and natural language processing. They represent academic, industrial, and governmental sectors as well as healthcare practice in countries all over the world. The panelists are neither members of our organising committee nor participants. Consequently, the list of panelists will be released after the submission deadline in May 2014.
The panel members will be guided to use our evaluation criteria in their assessments. Primary evaluation criteria include the effectiveness and originality of the submissions. Submissions will be judged on Usability, Visualisation, Interaction, and Aesthetics. The judges will be provided with a 5-point Likert scale for each heuristic and will also be asked to discuss the reasons behind their scores.
The final score will be the average of the Likert scores and the sum of unique problems. Our categories are based on the literature (Forsell & Johansson, 2010: “An Heuristic Set for Evaluation in Information Visualization”, AVI 2010, DOI: 10.1145/1842993.1843029) and adjusted for the present tasks and prototype access (i.e., the videos).
Usability Heuristics
1. Minimal Actions: whether the number of steps needed to reach the solution is acceptable,
2. Flexibility: whether there is an easy/obvious way to proceed to the next/other task, and
3. Orientation and Help: ease of undoing actions, going back to the main screen, and finding help.
Visualisation Heuristics
1. Information Encoding: whether the necessary/required info is shown,
2. Dataset Reduction: whether the required info is easy to digest,
3. Recognition rather than Recall: users should not have to memorise information to carry out tasks or understand the information presented,
4. Spatial Organization: layout, efficient use of space, and
5. Remove the extraneous: looks uncluttered.
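As an illustration of the scoring scheme described above (a 5-point Likert rating per heuristic, averaged), the following Python sketch aggregates judge ratings; the judge names, heuristic subset, and numbers are invented for the example:

```python
def average_score(ratings):
    """ratings: {judge: {heuristic: Likert score 1-5}} -> overall average."""
    scores = [s for judge in ratings.values() for s in judge.values()]
    return sum(scores) / len(scores)

# Hypothetical ratings from two judges over three heuristics.
ratings = {
    "judge_a": {"Minimal Actions": 4, "Flexibility": 5, "Spatial Organization": 3},
    "judge_b": {"Minimal Actions": 3, "Flexibility": 4, "Spatial Organization": 4},
}
print(round(average_score(ratings), 2))  # 3.83
```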
Recognitions will be given to the best submissions in a number of categories, depending on the field of submissions. Prospective categories include but are not limited to:
1. Effective use of visualisation,
2. Effective use of interaction,
3. Effective combination of interactive visualization with computational analysis,
4. Solution adapting to different environments (e.g., desktop, mobile/tablet or print for presentation),
5. Best use of external information resources (e.g., Wikipedia, social media, Flickr, or YouTube),
6. Best solution for Task 1a,
7. Best solution for Task 1b,
8. Best solution for Grand Challenge, and
9. Best integration of external information resources.
Task 1 Getting Started
Registration
To participate, you must first register for CLEF2014 on this page: http://147.162.2.122:8888/clef2014labs/. After we have received your registration, we will email you further guidelines about gaining access to Task 1 data.
Starting Points
Starting points for a solution include state-of-the-art techniques from interactive data visualisation, document and network visualisation, multimedia visualisation, and related fields. Following the well-known overview-first, details-on-demand paradigm is recommended as a good starting point. Implementations can be done in any appropriate environment, including native languages such as Java or packages such as D3.
Recommended reading:
- Requirements and examples of features for explorative search systems
- Problems and approaches for integration of automatic and visual-interactive data analysis methods
- Health Design Challenge 2013
- IEEE VIS Workshop on Public Health’s Wicked Problems
Examples
We provide the following examples to inspire all participants. However, these are not model solutions. They are intended to inspire critical thinking and new ideas.
Task 1a. Discharge resolution challenge
Task 1b. Visual exploration challenge
Task 1. Grand challenge of integrated 1a and 1b