
Author (dc.contributor.author): Herrera, José
Author (dc.contributor.author): Parra, Denis
Author (dc.contributor.author): Poblete, Bárbara
Admission date (dc.date.accessioned): 2020-05-11T22:14:54Z
Available date (dc.date.available): 2020-05-11T22:14:54Z
Publication date (dc.date.issued): 2020
Item citation (dc.identifier.citation): Future Generation Computer Systems 105 (2020) 631–649
Identifier (dc.identifier.other): 10.1016/j.future.2019.12.023
Identifier (dc.identifier.uri): https://repositorio.uchile.cl/handle/2250/174658
Abstract (dc.description.abstract): Community Question Answering (cQA) sites have emerged as platforms designed specifically for the exchange of questions and answers among communities of users. Although users tend to find good quality answers on cQA sites, there is evidence that they also engage in a significant volume of QA on other types of social sites, such as microblog platforms. Research indicates that users opt for these non-specific QA social networks because they contain up-to-date information on current events, propagate information rapidly, and carry social trust. In this sense, we propose that microblog platforms can emerge as a novel, valuable source of information for QA information retrieval tasks. However, we have found that it is not straightforward to transfer existing approaches for automatically retrieving relevant answers from traditional cQA platforms for use in microblogs. This is because unique characteristics differentiate microblog data from traditional cQA data, such as noise and very short text length. In this work, we study (1) whether microblog data can be used to automatically provide relevant answers for the QA task, and (2) which features contribute the most to finding relevant answers for a particular query. In particular, we introduce a conversation (thread)-level document model, as well as a machine learning ranking framework for microblog QA. We validate our proposal by using factoid QA as a proxy task, showing that Twitter conversations can indeed be used to automatically provide relevant results for QA. We are able to identify the features that contribute the most to QA ranking. In addition, we provide evidence that our method allows us to retrieve complex answers in the domain of non-factoid questions.
Sponsor (dc.description.sponsorship): Millennium Institute for Foundational Research on Data (IMFD), Chile. CONICYT Doctoral Program. Comisión Nacional de Investigación Científica y Tecnológica (CONICYT), CONICYT FONDECYT grants 1191604 and 1191791.
Language (dc.language.iso): en
Publisher (dc.publisher): Elsevier
Type of license (dc.rights): Attribution-NonCommercial-NoDerivs 3.0 Chile
Link to license (dc.rights.uri): http://creativecommons.org/licenses/by-nc-nd/3.0/cl/
Source (dc.source): Future Generation Computer Systems
Keywords (dc.subject): Ranking
Keywords (dc.subject): Question
Keywords (dc.subject): Answering
Keywords (dc.subject): Relevance
Keywords (dc.subject): Microblogs
Title (dc.title): Social QA in non-CQA platforms
Document type (dc.type): Journal article
Access rights (dcterms.accessRights): Open Access
Cataloguer (uchile.catalogador): rvh
Indexation (uchile.index): ISI-indexed article
Indexation (uchile.index): Scopus-indexed article


