|Publication type:||Article in scientific journal|
|Type of review:||Peer review (publication)|
|Title:||Using code reviews to automatically configure static analysis tools|
Di Penta, Massimiliano
|Published in:||Empirical Software Engineering|
|Publisher / Ed. Institution:||Springer|
|Subjects:||Static analysis tool; Code review; Automated tool configuration|
|Subject (DDC):||005: Computer programming, programs and data|
|Abstract:||Developers often use Static Code Analysis Tools (SCATs) to automatically detect various kinds of quality flaws in their source code. Since many warnings raised by SCATs may be irrelevant for a project/organization, it may be possible to leverage information from the project's development history to automatically configure which warnings a SCAT should raise and which it should not. In this paper, we propose an automated approach (Auto-SCAT) that leverages (statement-level) code review comments to recommend SCAT warnings, or warning categories, to be enabled. To this aim, we trace code review comments onto SCAT warnings by leveraging their descriptions and messages, as well as review comments made in other projects. We apply Auto-SCAT to study how CheckStyle, a well-known SCAT, can be configured in the context of six Java open-source projects, all using Gerrit to handle code reviews. Our results show that Auto-SCAT is able to classify code review comments into CheckStyle checks with a precision of 61% and a recall of 52%. When also considering code review comments not related to CheckStyle warnings, Auto-SCAT achieves a precision and a recall of ≈75%. Furthermore, Auto-SCAT can configure CheckStyle with a precision of 72.7% at the check level and 96.3% at the category level. Finally, our findings highlight that Auto-SCAT outperforms state-of-the-art baselines based on default CheckStyle configurations or on the history of previously-removed warnings.|
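The kind of output the abstract describes can be pictured as a standard CheckStyle XML configuration in which only the recommended checks are enabled. The sketch below is illustrative only: the specific check names are examples of real CheckStyle modules, not checks reported as recommended in the paper.

```xml
<?xml version="1.0"?>
<!DOCTYPE module PUBLIC
  "-//Checkstyle//DTD Checkstyle Configuration 1.3//EN"
  "https://checkstyle.org/dtds/configuration_1_3.dtd">
<!-- Illustrative configuration: an approach like Auto-SCAT would enable
     only the checks traced to past code review comments; all other
     checks stay disabled by simply not being listed. -->
<module name="Checker">
  <module name="TreeWalker">
    <module name="UnusedImports"/>
    <module name="MagicNumber"/>
    <module name="MethodLength"/>
  </module>
</module>
```

In CheckStyle, a check is active only if its module appears in the configuration, so recommending checks to enable amounts to emitting a file of this shape.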
|Further description:||Acquired as part of the Swiss National Licences (http://www.nationallizenzen.ch)|
|Fulltext version:||Published version|
|License (according to publishing contract):||Licence according to publishing contract|
|Department:||School of Engineering|
|Organisational Unit:||Institute of Applied Information Technology (InIT)|
|Published as part of the ZHAW project:||COSMOS – DevOps for Complex Cyber-physical Systems of Systems|
|Appears in collections:||Publikationen School of Engineering|
Files in This Item:
There are no files associated with this item.
Zampetti, F., Mudbhari, S., Arnaoudova, V., Di Penta, M., Panichella, S., & Antoniol, G. (2021). Using code reviews to automatically configure static analysis tools. Empirical Software Engineering, 27(1), 28. https://doi.org/10.1007/s10664-021-10076-4
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.