Show simple item record

Author (dc.contributor.author): Herrera, Jose
Author (dc.contributor.author): Poblete Labra, Bárbara
Author (dc.contributor.author): Parra, Denis
Admission date (dc.date.accessioned): 2019-05-31T15:19:04Z
Available date (dc.date.available): 2019-05-31T15:19:04Z
Publication date (dc.date.issued): 2018
Item citation (dc.identifier.citation): Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Volume 10772 LNCS, 2018
Identifier (dc.identifier.issn): 1611-3349
Identifier (dc.identifier.issn): 0302-9743
Identifier (dc.identifier.other): 10.1007/978-3-319-76941-7_38
Identifier (dc.identifier.uri): https://repositorio.uchile.cl/handle/2250/169308
Abstract (dc.description.abstract): Community Question Answering (cQA) sites have emerged as platforms designed specifically for the exchange of questions and answers among users. Although users tend to find good-quality answers on cQA sites, they also engage in a significant volume of QA interactions on other platforms, such as microblog networking sites. This is explained in part by the fact that microblog platforms contain up-to-date information on current events, propagate information rapidly, and carry social trust. Despite the potential of microblog platforms, such as Twitter, for automatic QA retrieval, how to leverage them for this task is not clear. Unique characteristics differentiate Twitter from traditional cQA platforms (e.g., short message length, low-quality and noisy information), which prevent prior findings in the area from being applied directly. In this work, we address this problem by studying: 1) the feasibility of Twitter as a QA platform and 2) the discriminating features that identify relevant answers to a particular query. In particular, we create a document model at the conversation-thread level, which enables us to aggregate microblog information, and set up a learning-to-rank framework, using factoid QA as a proxy task. Our experimental results show that microblog data can indeed be used to perform QA retrieval effectively. We identify domain-specific features, and combinations of those features, that better account for improving QA ranking, achieving an MRR of 0.7795 (a 62% improvement over our baseline method). In addition, we provide evidence that our method makes it possible to retrieve complex answers to non-factoid questions.
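The abstract reports ranking quality as Mean Reciprocal Rank (MRR): the average, over all test questions, of the reciprocal of the rank at which the first relevant answer appears. As a minimal illustrative sketch (this is not the authors' implementation; the function name and thread IDs below are hypothetical), MRR can be computed as follows:

# Illustrative sketch, not the authors' code. For each question, `ranked`
# is the list of candidate conversation-thread IDs in the order produced
# by the ranker, and `relevant` is the set of thread IDs that actually
# answer the question. All identifiers are hypothetical.
def mean_reciprocal_rank(runs):
    """runs: iterable of (ranked, relevant) pairs, one per question."""
    total, count = 0.0, 0
    for ranked, relevant in runs:
        count += 1
        for rank, thread_id in enumerate(ranked, start=1):
            if thread_id in relevant:
                total += 1.0 / rank  # reciprocal rank of first relevant hit
                break
    return total / count if count else 0.0

# Example: first question answered at rank 1, second at rank 4.
runs = [(["t1", "t2"], {"t1"}), (["t9", "t3", "t5", "t7"], {"t7"})]
print(mean_reciprocal_rank(runs))  # (1/1 + 1/4) / 2 = 0.625

For context, the reported MRR of 0.7795 combined with the stated 62% improvement implies a baseline MRR of roughly 0.7795 / 1.62 ≈ 0.48.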
Language (dc.language.iso): en
Publisher (dc.publisher): Springer Verlag
Type of license (dc.rights): Attribution-NonCommercial-NoDerivs 3.0 Chile
Link to license (dc.rights.uri): http://creativecommons.org/licenses/by-nc-nd/3.0/cl/
Source (dc.source): Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Keywords (dc.subject): Learning-to-rank
Keywords (dc.subject): Question answering
Keywords (dc.subject): Twitter
Title (dc.title): Learning to leverage microblog information for QA retrieval
Document type (dc.type): Journal article
Cataloguer (uchile.catalogador): jmm
Indexation (uchile.index): SCOPUS-indexed journal article
Harvest (uchile.cosecha): Yes

