Learning from source code history to identify performance failures
Author
dc.contributor.author
Sandoval Alcocer, Juan
Author
dc.contributor.author
Bergel, Alexandre
Author
dc.contributor.author
Valente, Marco Tulio
Accession date
dc.date.accessioned
2018-11-16T12:13:28Z
Available date
dc.date.available
2018-11-16T12:13:28Z
Publication date
dc.date.issued
2016
Item citation
dc.identifier.citation
In: ICPE '16: Proceedings of the 7th ACM/SPEC International Conference on Performance Engineering, pages 37-48. Delft, The Netherlands, March 12-16, 2016
Identifier
dc.identifier.other
10.1145/2851553.2851571
Identifier
dc.identifier.uri
https://repositorio.uchile.cl/handle/2250/152651
Abstract
dc.description.abstract
Source code changes may inadvertently introduce performance regressions. Benchmarking each software version is traditionally employed to identify performance regressions. Although effective, this exhaustive approach is hard to carry out in practice. This paper contrasts source code changes against performance variations. By analyzing 1,288 software versions from 17 open source projects, we identified 10 source code changes leading to a performance variation (improvement or regression). We have produced a cost model to infer whether a software commit introduces a performance variation by analyzing the source code and sampling the execution of a few versions. By profiling the execution of only 17% of the versions, our model is able to identify 83% of the performance regressions greater than 5% and 100% of the regressions greater than 50%.
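The abstract describes a selective strategy: a cheap static check on each commit decides which few versions are worth benchmarking, so only a fraction of the history is ever profiled. The Python sketch below illustrates that general idea under stated assumptions; the `looks_risky` keyword patterns, the 5% threshold, the 10% sampling rate, and the `checkout_and_get_workload` helper are hypothetical stand-ins, not the paper's actual cost model or its ten identified change patterns.

```python
import time
import random
from dataclasses import dataclass

# Illustrative sketch only: benchmark a commit when its diff looks risky
# (cheap static check) or when it is randomly sampled as a fresh baseline.
# The patterns and thresholds below are placeholders, not the paper's model.

RISKY_HINTS = ("loop", "cache", "collection", "recursion")  # hypothetical patterns

@dataclass
class Commit:
    sha: str
    diff: str

def looks_risky(commit: Commit) -> bool:
    """Cheap static check: does the diff touch performance-sensitive code?"""
    return any(hint in commit.diff.lower() for hint in RISKY_HINTS)

def benchmark(run_workload, repetitions: int = 5) -> float:
    """Median wall-clock time of the workload over a few repetitions."""
    timings = []
    for _ in range(repetitions):
        start = time.perf_counter()
        run_workload()
        timings.append(time.perf_counter() - start)
    return sorted(timings)[len(timings) // 2]

def scan_history(commits, checkout_and_get_workload,
                 sample_rate: float = 0.1, threshold: float = 0.05):
    """Benchmark risky or sampled commits; flag slowdowns past the threshold.

    `checkout_and_get_workload` is a caller-supplied hypothetical helper that
    checks out a commit and returns a zero-argument workload to time.
    """
    baseline = None
    regressions = []
    for commit in commits:
        if not (looks_risky(commit) or random.random() < sample_rate):
            continue  # skip commits that look performance-neutral
        elapsed = benchmark(checkout_and_get_workload(commit))
        if baseline is not None and elapsed > baseline * (1 + threshold):
            regressions.append((commit.sha, elapsed / baseline - 1))
        baseline = elapsed  # last measured version becomes the new baseline
    return regressions
```

In this toy setup, the fraction of versions actually profiled is governed by how selective `looks_risky` is plus the sampling rate, which mirrors the trade-off the abstract reports: profiling 17% of versions while still catching most large regressions.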