The medical question entailment data introduced in the AMIA 2016 Paper (Recognizing Question Entailment for Medical Question Answering)

abachaa/RQE_Data_AMIA2016


----------------------------------------------------------------
Recognizing Question Entailment for Medical Question Answering 
Asma Ben Abacha & Dina Demner-Fushman
AMIA 2016
----------------------------------------------------------------

Recognizing Question Entailment (RQE) Datasets:

- RQE Training Dataset: a collection of 8,588 clinical question-question pairs.
- RQE Test Dataset: a collection of 302 medical question pairs, each matching an NLM consumer health question with an NIH FAQ.
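Each pair consists of two questions and an entailment label. The pairs could be loaded with a few lines of Python; note that the element names (`pair`, `chq`, `faq`) and the `value` attribute below are illustrative assumptions for the sketch, not the guaranteed schema of the distributed files — check the actual dataset files before use.

```python
import xml.etree.ElementTree as ET

# Hypothetical example of one RQE pair; the real element and attribute
# names in the distributed files may differ.
SAMPLE = """\
<pairs>
  <pair pid="1" value="true">
    <chq>What are the symptoms of hypothyroidism?</chq>
    <faq>What are the signs and symptoms of an underactive thyroid?</faq>
  </pair>
</pairs>
"""

def load_pairs(xml_text):
    """Parse question pairs into (question1, question2, entails) tuples."""
    root = ET.fromstring(xml_text)
    pairs = []
    for pair in root.iter("pair"):
        q1 = pair.findtext("chq", default="").strip()
        q2 = pair.findtext("faq", default="").strip()
        entails = pair.get("value") == "true"
        pairs.append((q1, q2, entails))
    return pairs

if __name__ == "__main__":
    for q1, q2, entails in load_pairs(SAMPLE):
        print(f"{entails}: {q1!r} -> {q2!r}")
```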


If you use these datasets, please cite the following paper: 

	Recognizing Question Entailment for Medical Question Answering. Asma Ben Abacha and Dina Demner-Fushman. AMIA Annual Symposium Proceedings, pages 310-318, 2016.
	https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5333286/

	@inproceedings{RQE:AMIA16,
	  author    = {Asma {Ben Abacha} and Dina Demner{-}Fushman},
	  title     = {Recognizing Question Entailment for Medical Question Answering},
	  booktitle = {AMIA 2016, American Medical Informatics Association Annual Symposium, Chicago, IL, USA, November 12-16, 2016},
	  year      = {2016},
	  url       = {https://lhncbc.nlm.nih.gov/publication/pub9500},
	  abstract  = {With the increasing heterogeneity and specialization of medical texts, automated question answering is becoming more and more challenging. In this context, answering a given medical question by retrieving similar questions that have already been answered by human experts seems to be a promising solution. In this paper, we propose a new approach for the detection of similar questions based on Recognizing Question Entailment (RQE). In particular, we consider Frequently Asked Questions (FAQs) as a valuable and widespread source of information. Our final goal is to automatically provide an existing answer if an FAQ similar to a consumer health question exists. We evaluate our approach using consumer health questions received by the National Library of Medicine and FAQs collected from NIH websites. Our first results are promising and suggest the feasibility of our approach as a valuable complement to classic question answering approaches.}
	}


