Please use this identifier to cite or link to this item:
https://doi.org/10.21256/zhaw-26916
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Devroey, Xavier | - |
dc.contributor.author | Gambi, Alessio | - |
dc.contributor.author | Galeotti, Juan Pablo | - |
dc.contributor.author | Just, René | - |
dc.contributor.author | Kifetew, Fitsum | - |
dc.contributor.author | Panichella, Annibale | - |
dc.contributor.author | Panichella, Sebastiano | - |
dc.date.accessioned | 2023-02-11T10:22:02Z | - |
dc.date.available | 2023-02-11T10:22:02Z | - |
dc.date.issued | 2022 | - |
dc.identifier.issn | 0960-0833 | de_CH |
dc.identifier.issn | 1099-1689 | de_CH |
dc.identifier.uri | https://digitalcollection.zhaw.ch/handle/11475/26916 | - |
dc.description.abstract | Researchers and practitioners have designed and implemented various automated test case generators to support effective software testing. Such generators exist for various languages (e.g., Java, C#, or Python) and various platforms (e.g., desktop, web, or mobile applications). The generators exhibit varying effectiveness and efficiency, depending on the testing goals they aim to satisfy (e.g., unit-testing of libraries versus system-testing of entire applications) and the underlying techniques they implement. In this context, practitioners need to be able to compare different generators to identify the most suited one for their requirements, while researchers seek to identify future research directions. This can be achieved by systematically executing large-scale evaluations of different generators. However, executing such empirical evaluations is not trivial and requires substantial effort to select appropriate benchmarks, set up the evaluation infrastructure, and collect and analyse the results. In this Software Note, we present our JUnit Generation Benchmarking Infrastructure (JUGE) supporting generators (search-based, random-based, symbolic execution, etc.) seeking to automate the production of unit tests for various purposes (validation, regression testing, fault localization, etc.). The primary goal is to reduce the overall benchmarking effort, ease the comparison of several generators, and enhance the knowledge transfer between academia and industry by standardizing the evaluation and comparison process. Since 2013, several editions of a unit testing tool competition, co-located with the Search-Based Software Testing Workshop, have taken place where JUGE was used and evolved. As a result, an increasing number of tools (over 10) from academia and industry have been evaluated on JUGE, matured over the years, and allowed the identification of future research directions. Based on the experience gained from the competitions, we discuss the expected impact of JUGE in improving the knowledge transfer on tools and approaches for test generation between academia and industry. Indeed, the JUGE infrastructure demonstrated an implementation design that is flexible enough to enable the integration of additional unit test generation tools, which is practical for developers and allows researchers to experiment with new and advanced unit testing tools and approaches. | de_CH |
dc.language.iso | en | de_CH |
dc.publisher | Wiley | de_CH |
dc.relation.ispartof | Software Testing, Verification and Reliability | de_CH |
dc.rights | Licence according to publishing contract | de_CH |
dc.subject | Benchmarking | de_CH |
dc.subject | Evaluation infrastructure | de_CH |
dc.subject | JUGE | de_CH |
dc.subject | Unit test generation | de_CH |
dc.subject.ddc | 005: Computerprogrammierung, Programme und Daten | de_CH |
dc.title | JUGE : an infrastructure for benchmarking Java unit test generators | de_CH |
dc.type | Beitrag in wissenschaftlicher Zeitschrift | de_CH |
dcterms.type | Text | de_CH |
zhaw.departement | School of Engineering | de_CH |
zhaw.organisationalunit | Institut für Informatik (InIT) | de_CH |
dc.identifier.doi | 10.1002/stvr.1838 | de_CH |
dc.identifier.doi | 10.21256/zhaw-26916 | - |
zhaw.funding.eu | info:eu-repo/grantAgreement/EC/H2020/957254//DevOps for Complex Cyber-physical Systems/COSMOS | de_CH |
zhaw.issue | 3 | de_CH |
zhaw.originated.zhaw | Yes | de_CH |
zhaw.pages.start | e1838 | de_CH |
zhaw.publication.status | acceptedVersion | de_CH |
zhaw.volume | 33 | de_CH |
zhaw.embargo.end | 2023-12-20 | de_CH |
zhaw.publication.review | Peer review (Publikation) | de_CH |
zhaw.webfeed | Software Engineering | de_CH |
zhaw.funding.zhaw | COSMOS – DevOps for Complex Cyber-physical Systems of Systems | de_CH |
zhaw.author.additional | No | de_CH |
zhaw.display.portrait | Yes | de_CH |
zhaw.relation.references | https://doi.org/10.5281/zenodo.4904393 | de_CH |
Appears in collections: | Publikationen School of Engineering |
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
2022_Devroey-etal_JUGE-Java-unit-test-generator-benchmarking-infrastructure.pdf | | 477.51 kB | Adobe PDF |
Devroey, X., Gambi, A., Galeotti, J. P., Just, R., Kifetew, F., Panichella, A., & Panichella, S. (2022). JUGE : an infrastructure for benchmarking Java unit test generators. Software Testing, Verification and Reliability, 33(3), e1838. https://doi.org/10.1002/stvr.1838
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.