Please use this identifier to cite or link to this item: https://doi.org/10.21256/zhaw-26916
Publication type: Journal article
Type of review: Peer review (publication)
Title: JUGE: an infrastructure for benchmarking Java unit test generators
Author(s): Devroey, Xavier
Gambi, Alessio
Galeotti, Juan Pablo
Just, René
Kifetew, Fitsum
Panichella, Annibale
Panichella, Sebastiano
et al.: No
DOI: 10.1002/stvr.1838
10.21256/zhaw-26916
Published in: Software Testing, Verification and Reliability
Volume: 33
Issue: 3
Page(s): e1838
Publication date: 2022
Publisher: Wiley
ISSN: 0960-0833
1099-1689
Language: English
Keywords: Benchmarking; Evaluation infrastructure; JUGE; Unit test generation
Subject area (DDC): 005: Computer programming, programs, and data
Abstract: Researchers and practitioners have designed and implemented various automated test case generators to support effective software testing. Such generators exist for various languages (e.g., Java, C#, or Python) and various platforms (e.g., desktop, web, or mobile applications). The generators exhibit varying effectiveness and efficiency, depending on the testing goals they aim to satisfy (e.g., unit-testing of libraries versus system-testing of entire applications) and the underlying techniques they implement. In this context, practitioners need to be able to compare different generators to identify the one best suited to their requirements, while researchers seek to identify future research directions. This can be achieved by systematically executing large-scale evaluations of different generators. However, executing such empirical evaluations is not trivial and requires substantial effort to select appropriate benchmarks, set up the evaluation infrastructure, and collect and analyse the results. In this Software Note, we present our JUnit Generation Benchmarking Infrastructure (JUGE) supporting generators (search-based, random-based, symbolic execution, etc.) seeking to automate the production of unit tests for various purposes (validation, regression testing, fault localization, etc.). The primary goal is to reduce the overall benchmarking effort, ease the comparison of several generators, and enhance the knowledge transfer between academia and industry by standardizing the evaluation and comparison process. Since 2013, several editions of a unit testing tool competition, co-located with the Search-Based Software Testing Workshop, have taken place where JUGE was used and evolved. As a result, an increasing number of tools (over 10) from academia and industry have been evaluated on JUGE, matured over the years, and allowed the identification of future research directions.
Based on the experience gained from the competitions, we discuss the expected impact of JUGE in improving the knowledge transfer on tools and approaches for test generation between academia and industry. Indeed, the JUGE infrastructure demonstrated an implementation design that is flexible enough to enable the integration of additional unit test generation tools, which is practical for developers and allows researchers to experiment with new and advanced unit testing tools and approaches.
URI: https://digitalcollection.zhaw.ch/handle/11475/26916
Related research data: https://doi.org/10.5281/zenodo.4904393
Full text version: Accepted version
License (according to publishing contract): License according to publishing contract
Embargoed until: 2023-12-20
Department: School of Engineering
Organisational unit: Institut für Informatik (InIT)
Published as part of the ZHAW project: COSMOS – DevOps for Complex Cyber-physical Systems of Systems
Appears in collections: Publikationen School of Engineering

Files in this item:
File: 2022_Devroey-etal_JUGE-Java-unit-test-generator-benchmarking-infrastructure.pdf (477.51 kB, Adobe PDF)
Devroey, X., Gambi, A., Galeotti, J. P., Just, R., Kifetew, F., Panichella, A., & Panichella, S. (2022). JUGE: an infrastructure for benchmarking Java unit test generators. Software Testing, Verification and Reliability, 33(3), e1838. https://doi.org/10.1002/stvr.1838


All items in this repository are protected by copyright, unless otherwise indicated.