Please use this identifier to cite or link to this item: https://doi.org/10.21256/zhaw-25672
Publication type: Article in scientific journal
Type of review: Peer review (publication)
Title: Test smells 20 years later: detectability, validity, and reliability
Authors: Panichella, Annibale
Panichella, Sebastiano
Fraser, Gordon
Sawant, Anand
Hellendoorn, Vincent
et al.: No
DOI: 10.1007/s10664-022-10207-5
10.21256/zhaw-25672
Published in: Empirical Software Engineering
Volume: 27
Issue: 7
Page(s): 170
Publication date: 2022
Publisher / Publishing institution: Springer
ISSN: 1382-3256
1573-7616
Language: English
Keywords: Test generation; Test smell; Software quality
Subject area (DDC): 005: Computer programming, programs and data
Abstract: Test smells aim to capture design issues in test code that reduce its maintainability. They have been studied extensively and generally found to be quite prevalent in both human-written and automatically generated test cases. However, most evidence of prevalence is based on specific static detection rules. Although these are based on the original, conceptual definitions of the various test smells, recent empirical studies indicate that developers perceive warnings raised by detection tools as overly strict and not representative of the maintainability and quality of test suites. This leads us to re-assess the detection accuracy of test smell detection tools and to investigate the prevalence and detectability of test smells more broadly. Specifically, we construct a hand-annotated dataset spanning hundreds of test suites, both written by developers and generated by two test generation tools (EvoSuite and JTExpert), and perform a multi-stage, cross-validated manual analysis to identify the presence of six types of test smells in these. We then use this manual labeling to benchmark the performance and external validity of two test smell detection tools: one widely used in prior work and one recently introduced with the express goal of matching developer perceptions of test smells. Our results primarily show that the current vocabulary of test smells is highly mismatched to real concerns: multiple smells were ubiquitous in developer-written tests but virtually never correlated with semantic or maintainability flaws; machine-generated tests often scored better, but in reality suffered from a host of problems not well captured by current test smells. Current test smell detection strategies poorly characterized the issues in these automatically generated test suites; in particular, the older tool's detection strategies misclassified over 70% of test smells, both missing real instances (false negatives) and marking many smell-free tests as smelly (false positives). We identify common patterns in these tests that can be used to improve the tools, refine and update the definitions of certain test smells, and highlight as-yet-uncharacterized issues. Our findings suggest the need for (i) more appropriate metrics that match development practice and (ii) more accurate detection strategies, to be evaluated primarily in industrial contexts.
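For readers unfamiliar with the terminology, the sketch below illustrates what a static test smell detector would typically flag. It is a minimal, hypothetical JUnit 5 example: the ShoppingCart class and both tests are invented for illustration and are not taken from the paper's dataset. The first test exhibits "Assertion Roulette" (several assertions with no failure messages) and "Eager Test" (one test method exercising several behaviours), two classic smells from the literature; the second shows a focused, documented alternative.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Hypothetical class under test, kept trivial so the example is self-contained.
class ShoppingCart {
    private final java.util.List<Double> prices = new java.util.ArrayList<>();
    void add(double price) { prices.add(price); }
    int size() { return prices.size(); }
    double total() { return prices.stream().mapToDouble(Double::doubleValue).sum(); }
    void clear() { prices.clear(); }
}

class ShoppingCartTest {

    @Test
    void smellyTest() {
        ShoppingCart cart = new ShoppingCart();
        cart.add(9.99);
        cart.add(5.01);
        // Assertion Roulette: if one of these fails, the report does not say
        // which expectation was violated, because no message is provided.
        assertEquals(2, cart.size());
        assertEquals(15.00, cart.total(), 0.001);
        cart.clear();
        // Eager Test: the same method also verifies clearing behaviour.
        assertEquals(0, cart.size());
    }

    @Test
    void totalSumsAllItemPrices() {
        ShoppingCart cart = new ShoppingCart();
        cart.add(9.99);
        cart.add(5.01);
        assertEquals(15.00, cart.total(), 0.001,
                "total() should sum the prices of all added items");
    }
}

Static detectors flag structural patterns like those in smellyTest; the paper's central question is whether such flags actually coincide with what developers consider maintainability problems.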
Further information: Acquired as part of the Swiss National Licences (http://www.nationallizenzen.ch)
URI: https://digitalcollection.zhaw.ch/handle/11475/25672
Related research data: https://zenodo.org/record/3337892#.XswWby-w3yU
Full-text version: Published version
Licence (according to publisher contract): CC BY 4.0: Attribution 4.0 International
Department: School of Engineering
Organisational unit: Institut für Informatik (InIT)
Published as part of the ZHAW project: COSMOS – DevOps for Complex Cyber-physical Systems of Systems
Appears in collections: Publikationen School of Engineering

Files in this item:
File | Description | Size | Format
2022_Panichella-etal_Test-smells-20-years-later.pdf | Published Version | 3.27 MB | Adobe PDF
2022_Panichella-etal_Test-smells-20-years-later_EMSE.pdf | Submitted Version | 420.55 kB | Adobe PDF
Panichella, A., Panichella, S., Fraser, G., Sawant, A., & Hellendoorn, V. (2022). Test smells 20 years later: detectability, validity, and reliability. Empirical Software Engineering, 27(7), 170. https://doi.org/10.1007/s10664-022-10207-5
Panichella, A. et al. (2022) ‘Test smells 20 years later: detectability, validity, and reliability’, Empirical Software Engineering, 27(7), p. 170. Available at: https://doi.org/10.1007/s10664-022-10207-5.
A. Panichella, S. Panichella, G. Fraser, A. Sawant, and V. Hellendoorn, “Test smells 20 years later: detectability, validity, and reliability,” Empirical Software Engineering, vol. 27, no. 7, p. 170, 2022, doi: 10.1007/s10664-022-10207-5.
PANICHELLA, Annibale, Sebastiano PANICHELLA, Gordon FRASER, Anand SAWANT and Vincent HELLENDOORN, 2022. Test smells 20 years later: detectability, validity, and reliability. Empirical Software Engineering. 2022. Vol. 27, no. 7, p. 170. DOI 10.1007/s10664-022-10207-5
Panichella, Annibale, Sebastiano Panichella, Gordon Fraser, Anand Sawant, and Vincent Hellendoorn. 2022. “Test Smells 20 Years Later: Detectability, Validity, and Reliability.” Empirical Software Engineering 27 (7): 170. https://doi.org/10.1007/s10664-022-10207-5.
Panichella, Annibale, et al. “Test Smells 20 Years Later: Detectability, Validity, and Reliability.” Empirical Software Engineering, vol. 27, no. 7, 2022, p. 170, https://doi.org/10.1007/s10664-022-10207-5.


All items in this repository are protected by copyright, unless otherwise indicated.