Please use this identifier to cite or link to this item: https://doi.org/10.21256/zhaw-26916
Full metadata record
dc.contributor.author: Devroey, Xavier
dc.contributor.author: Gambi, Alessio
dc.contributor.author: Galeotti, Juan Pablo
dc.contributor.author: Just, René
dc.contributor.author: Kifetew, Fitsum
dc.contributor.author: Panichella, Annibale
dc.contributor.author: Panichella, Sebastiano
dc.date.accessioned: 2023-02-11T10:22:02Z
dc.date.available: 2023-02-11T10:22:02Z
dc.date.issued: 2022
dc.identifier.issn: 0960-0833
dc.identifier.issn: 1099-1689
dc.identifier.uri: https://digitalcollection.zhaw.ch/handle/11475/26916
dc.description.abstract: Researchers and practitioners have designed and implemented various automated test case generators to support effective software testing. Such generators exist for various languages (e.g., Java, C#, or Python) and various platforms (e.g., desktop, web, or mobile applications). The generators exhibit varying effectiveness and efficiency, depending on the testing goals they aim to satisfy (e.g., unit-testing of libraries versus system-testing of entire applications) and the underlying techniques they implement. In this context, practitioners need to be able to compare different generators to identify the most suited one for their requirements, while researchers seek to identify future research directions. This can be achieved by systematically executing large-scale evaluations of different generators. However, executing such empirical evaluations is not trivial and requires substantial effort to select appropriate benchmarks, set up the evaluation infrastructure, and collect and analyse the results. In this Software Note, we present our JUnit Generation Benchmarking Infrastructure (JUGE) supporting generators (search-based, random-based, symbolic execution, etc.) seeking to automate the production of unit tests for various purposes (validation, regression testing, fault localization, etc.). The primary goal is to reduce the overall benchmarking effort, ease the comparison of several generators, and enhance the knowledge transfer between academia and industry by standardizing the evaluation and comparison process. Since 2013, several editions of a unit testing tool competition, co-located with the Search-Based Software Testing Workshop, have taken place where JUGE was used and evolved. As a result, an increasing number of tools (over 10) from academia and industry have been evaluated on JUGE, matured over the years, and allowed the identification of future research directions. Based on the experience gained from the competitions, we discuss the expected impact of JUGE in improving the knowledge transfer on tools and approaches for test generation between academia and industry. Indeed, the JUGE infrastructure demonstrated an implementation design that is flexible enough to enable the integration of additional unit test generation tools, which is practical for developers and allows researchers to experiment with new and advanced unit testing tools and approaches.
dc.language.iso: en
dc.publisher: Wiley
dc.relation.ispartof: Software Testing, Verification and Reliability
dc.rights: Licence according to publishing contract
dc.subject: Benchmarking
dc.subject: Evaluation infrastructure
dc.subject: JUGE
dc.subject: Unit test generation
dc.subject.ddc: 005: Computer programming, programs and data
dc.title: JUGE: An infrastructure for benchmarking Java unit test generators
dc.type: Article in a scientific journal
dcterms.type: Text
zhaw.departement: School of Engineering
zhaw.organisationalunit: Institut für Informatik (InIT)
dc.identifier.doi: 10.1002/stvr.1838
dc.identifier.doi: 10.21256/zhaw-26916
zhaw.funding.eu: info:eu-repo/grantAgreement/EC/H2020/957254//DevOps for Complex Cyber-physical Systems/COSMOS
zhaw.issue: 3
zhaw.originated.zhaw: Yes
zhaw.pages.start: e1838
zhaw.publication.status: acceptedVersion
zhaw.volume: 33
zhaw.embargo.end: 2023-12-20
zhaw.publication.review: Peer review (publication)
zhaw.webfeed: Software Engineering
zhaw.funding.zhaw: COSMOS – DevOps for Complex Cyber-physical Systems of Systems
zhaw.author.additional: No
zhaw.display.portrait: Yes
zhaw.relation.references: https://doi.org/10.5281/zenodo.4904393
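For context on the artifacts this record describes: the generators benchmarked with JUGE emit JUnit test classes for Java classes under test. The sketch below is purely illustrative and is not taken from JUGE or from any competing tool; it only shows the general shape of such a generated test, using java.util.Stack as a stand-in class under test, JUnit 4 assertions, and test names that are assumptions for illustration.

```java
// Illustrative sketch only: the general shape of a JUnit test class that a
// unit test generator (search-based, random-based, etc.) might emit for a
// class under test. java.util.Stack stands in for a benchmark class; the
// test names and scenarios are assumptions, not JUGE output.
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.util.Stack;
import org.junit.Test;

public class Stack_GeneratedTest {

    @Test
    public void pushThenPopReturnsElementsInLifoOrder() {
        Stack<Integer> stack = new Stack<>();
        stack.push(7);
        stack.push(42);
        // The last element pushed is the first element popped.
        assertEquals(Integer.valueOf(42), stack.pop());
        assertEquals(Integer.valueOf(7), stack.pop());
    }

    @Test
    public void newStackIsEmpty() {
        // A freshly created stack contains no elements.
        assertTrue(new Stack<Integer>().isEmpty());
    }
}
```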
Appears in collections: Publikationen School of Engineering

Files in This Item:
2022_Devroey-etal_JUGE-Java-unit-test-generator-benchmarking-infrastructure.pdf (477.51 kB, Adobe PDF)
Devroey, X., Gambi, A., Galeotti, J. P., Just, R., Kifetew, F., Panichella, A., & Panichella, S. (2022). JUGE: An infrastructure for benchmarking Java unit test generators. Software Testing, Verification and Reliability, 33(3), e1838. https://doi.org/10.1002/stvr.1838

