Please use this identifier to cite or link to this item:
https://doi.org/10.21256/zhaw-30834
Publication type: | Book part |
Type of review: | Editorial review |
Title: | Vulnerabilities introduced by LLMs through code suggestions |
Authors: | Panichella, Sebastiano |
et al.: | No
DOI: | 10.1007/978-3-031-54827-7_9 ; 10.21256/zhaw-30834
Published in: | Large language models in cybersecurity : threats, exposure and mitigation |
Editors of the parent work: | Kucharavy, Andrei; Plancherel, Octave; Mulder, Valentin; Mermoud, Alain; Lenders, Vincent
Page(s): | 87 |
Pages to: | 97 |
Issue Date: | 2024 |
Publisher / Ed. Institution: | Springer |
Publisher / Ed. Institution: | Cham |
ISBN: | 978-3-031-54826-0 ; 978-3-031-54827-7
Language: | English |
Subject (DDC): | 005: Computer programming, programs and data ; 006: Special computer methods
Abstract: | Code suggestions from generative language models such as ChatGPT can contain vulnerabilities because they often reproduce outdated code and programming practices that are over-represented in the older code libraries on which the LLMs were trained. Advanced attackers can exploit this by injecting code with known but hard-to-detect vulnerabilities into the training datasets. Mitigations include user education and engineered safeguards, such as LLMs trained for vulnerability detection or rule-based checking of codebases. Analysis of LLMs' code generation capabilities, including formal verification and analysis of the source training dataset (code–comment pairs), is necessary for effective vulnerability detection and mitigation.
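The rule-based checking mentioned in the abstract can be illustrated with a minimal sketch (not taken from the chapter): a small deny-list of regex patterns that commonly flag insecure Python suggestions. The pattern set and the `check_suggestion` helper are illustrative assumptions, not the chapter's method.

```python
import re

# Illustrative deny-list: each regex maps to a short warning describing
# why the matched construct is commonly considered insecure.
INSECURE_PATTERNS = {
    r"\beval\s*\(": "eval() executes arbitrary code",
    r"\bpickle\.loads?\s*\(": "unpickling untrusted data allows code execution",
    r"shell\s*=\s*True": "subprocess with shell=True risks command injection",
    r"\bhashlib\.md5\s*\(": "MD5 is broken for security-sensitive hashing",
}

def check_suggestion(code: str) -> list[str]:
    """Return a warning for every insecure pattern found in a code snippet."""
    return [msg for pat, msg in INSECURE_PATTERNS.items() if re.search(pat, code)]

# Example: scan an LLM-suggested snippet before accepting it.
suggestion = "subprocess.run(cmd, shell=True)\nh = hashlib.md5(pw)"
for warning in check_suggestion(suggestion):
    print(warning)
```

Real deployments would use a mature linter with a curated rule set rather than ad-hoc regexes, but the shape is the same: match suggested code against known vulnerable idioms before it reaches the codebase.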
URI: | https://digitalcollection.zhaw.ch/handle/11475/30834 |
Fulltext version: | Published version |
License (according to publishing contract): | |
Departement: | School of Engineering |
Organisational Unit: | Institute of Computer Science (InIT) |
Published as part of the ZHAW project: | COSMOS – DevOps for Complex Cyber-physical Systems of Systems |
Appears in collections: | Publikationen School of Engineering |
Files in This Item:
File | Description | Size | Format
---|---|---|---
2024_Panichella_Vulnerabilities-introduced-by-LLM-through-code-suggestions.pdf | | 287.29 kB | Adobe PDF
Panichella, S. (2024). Vulnerabilities introduced by LLMs through code suggestions. In A. Kucharavy, O. Plancherel, V. Mulder, A. Mermoud, & V. Lenders (Eds.), Large language models in cybersecurity : threats, exposure and mitigation (pp. 87–97). Springer. https://doi.org/10.1007/978-3-031-54827-7_9
Panichella, S. (2024) ‘Vulnerabilities introduced by LLMs through code suggestions’, in A. Kucharavy et al. (eds) Large language models in cybersecurity : threats, exposure and mitigation. Cham: Springer, pp. 87–97. Available at: https://doi.org/10.1007/978-3-031-54827-7_9.
S. Panichella, “Vulnerabilities introduced by LLMs through code suggestions,” in Large language models in cybersecurity : threats, exposure and mitigation, A. Kucharavy, O. Plancherel, V. Mulder, A. Mermoud, and V. Lenders, Eds. Cham: Springer, 2024, pp. 87–97. doi: 10.1007/978-3-031-54827-7_9.
PANICHELLA, Sebastiano, 2024. Vulnerabilities introduced by LLMs through code suggestions. In: Andrei KUCHARAVY, Octave PLANCHEREL, Valentin MULDER, Alain MERMOUD and Vincent LENDERS (eds.), Large language models in cybersecurity : threats, exposure and mitigation. Cham: Springer. pp. 87–97. ISBN 978-3-031-54826-0
Panichella, Sebastiano. 2024. “Vulnerabilities Introduced by LLMs through Code Suggestions.” In Large Language Models in Cybersecurity : Threats, Exposure and Mitigation, edited by Andrei Kucharavy, Octave Plancherel, Valentin Mulder, Alain Mermoud, and Vincent Lenders, 87–97. Cham: Springer. https://doi.org/10.1007/978-3-031-54827-7_9.
Panichella, Sebastiano. “Vulnerabilities Introduced by LLMs through Code Suggestions.” Large Language Models in Cybersecurity : Threats, Exposure and Mitigation, edited by Andrei Kucharavy et al., Springer, 2024, pp. 87–97, https://doi.org/10.1007/978-3-031-54827-7_9.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.