Benchmarking Large Language Models for Autonomous Run-time Error Repair: Toward Self-Healing Software Systems
Publication Type:
Conference/Workshop Paper
Venue:
International Conference on Evaluation and Assessment in Software Engineering
Abstract
As software systems grow in complexity and become integral to daily operations, traditional approaches to software testing, maintenance, and evolution are increasingly inadequate. Recent advances in artificial intelligence, particularly in large language models, offer promising avenues for achieving self-healing software—software capable of autonomously detecting, diagnosing, and repairing faults without human intervention. However, while much of the existing literature focuses on code repair of vulnerabilities or repository-level bugs, the application of large language models to autonomously repairing run-time errors—which require dynamic analysis and execution context awareness—remains largely uncharted.
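To make the targeted fault class concrete, the sketch below shows, in C++ (one of the two benchmark languages), a hypothetical LeetCode-style function that compiles cleanly but fails only at run time, alongside the kind of guarded rewrite a repair is expected to produce. The task and the function names are illustrative assumptions, not problems from the paper's dataset.

#include <iostream>
#include <vector>

// Buggy variant: compiles cleanly but indexes outside the vector when the input
// is empty or contains no positive element, so the fault only shows up at run time.
int lastPositiveBuggy(const std::vector<int>& nums) {
    int i = static_cast<int>(nums.size()) - 1;
    while (nums[i] <= 0) {   // run-time error: i can reach -1 and nums[i] goes out of bounds
        --i;
    }
    return nums[i];
}

// Repaired variant: the kind of fix a model is asked to generate, keeping every
// index access inside bounds and handling the empty-input path explicitly.
int lastPositiveFixed(const std::vector<int>& nums) {
    for (int i = static_cast<int>(nums.size()) - 1; i >= 0; --i) {
        if (nums[i] > 0) {
            return nums[i];
        }
    }
    return -1;  // sentinel when no positive element exists
}

int main() {
    std::cout << lastPositiveFixed({3, -1, 7, -5}) << '\n';  // prints 7
    std::cout << lastPositiveFixed({}) << '\n';              // prints -1 instead of crashing
    return 0;
}

Diagnosing such a fault requires knowing which inputs reach the failing statement at execution time, which is why the abstract distinguishes run-time errors from defects that can be found by static inspection alone.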
In this study, we empirically benchmark ten distinct large language models—ChatGPT-4o, ChatGPT-4o-mini, Claude 3.5 Sonnet, Claude 3.5 Haiku, Gemini 1.5 Flash, Llama 3.2, Mistral Nemo, Grok Beta, Command R+, and Jamba 1.5 Large—to assess their ability to repair run-time errors in code. We conducted our evaluation on a dataset of 76 programming problems manually sourced from LeetCode, implemented in C++ (48 problems) and Java (28 problems). Each model was provided with a single opportunity to generate a corrected solution, which was then evaluated based on its ability to pass all associated test cases.
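The single-attempt, all-tests-must-pass criterion described above can be sketched as follows. The candidate function (a trivial array sum) and the test cases are hypothetical placeholders, since the paper's actual problems and test suites are not listed here.

#include <iostream>
#include <numeric>
#include <utility>
#include <vector>

// Stand-in for the one model-generated repair under evaluation.
int candidateSolution(const std::vector<int>& nums) {
    return std::accumulate(nums.begin(), nums.end(), 0);
}

int main() {
    // Each pair is (input, expected output) from the problem's test suite.
    std::vector<std::pair<std::vector<int>, int>> tests = {
        {{1, 2, 3}, 6},
        {{}, 0},
        {{-4, 4}, 0},
    };

    bool repaired = true;
    for (const auto& [input, expected] : tests) {
        if (candidateSolution(input) != expected) {
            repaired = false;  // a single failing case rejects the one-shot repair
            break;
        }
    }
    std::cout << (repaired ? "PASS: repair accepted" : "FAIL: repair rejected") << '\n';
    return 0;
}

Under this scheme a model receives no credit for a partially working fix, which keeps the benchmark's success measure strict and easy to compare across the ten models.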
Our experimental results provide early empirical evidence of the potential of large language models to drive a paradigm shift in artificial intelligence-driven software engineering. The findings reveal that while certain large language models demonstrate strong code-fixing capabilities, others struggle, highlighting significant performance disparities across models. This work not only fills a critical gap in empirical software engineering, but also opens avenues for refining artificial intelligence-driven software engineering, particularly for self-healing software.
BibTeX
@inproceedings{Bucaioni7185,
author = {Alessio Bucaioni and Gabriele Gualandi and Johan Toma},
title = {Benchmarking Large Language Models for Autonomous Run-time Error Repair: Toward Self-Healing Software Systems},
month = {June},
year = {2025},
booktitle = {International Conference on Evaluation and Assessment in Software Engineering},
url = {http://www.ipr.mdu.se/publications/7185-}
}