Reframing Software Log Summarisation as Multi-Label Classification With Encoder-Decoder Transformer Model
Abstract
As software systems become more advanced and capable of meeting sophisticated demands, they also become more complex. Consequently, software system logs, the most effective tool programmers have for understanding system diagnostics and taking appropriate action, become as complicated as the systems that generate them. Software system log summarisation addresses this issue by processing the logs generated by complex systems and extracting or summarising their meaning in a more readable, less complex format. Recent advances in natural language processing, driven by transformers that evolved into large language models, offer substantial capabilities that can be applied to log summarisation tasks. In this study, we explore this capability using a transformer-based model to summarise complex software system logs. The experimental results demonstrate that the fine-tuned T5-Small model improves on the average ROUGE-1 and ROUGE-L scores of the BART-Large and Pegasus-Large models by approximately 8.46% and 15.37%, respectively. Thus, the fine-tuned T5-Small improves on the fine-tuned BART-Large and Pegasus-Large models by approximately 11.92% on average in terms of ROUGE-1 and ROUGE-L scores, at a lower computational cost. © 2025 IEEE.
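The comparison above is reported in ROUGE-1 (unigram overlap) and ROUGE-L (longest-common-subsequence overlap) F1 scores. As a rough illustration of what these metrics measure — this is not the authors' evaluation code, and in practice a library such as `rouge-score` would be used — both can be sketched from their standard definitions:

```python
from collections import Counter


def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: unigram overlap between candidate and reference summaries."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    overlap = sum((Counter(cand) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(cand), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)


def rouge_l_f1(candidate: str, reference: str) -> float:
    """ROUGE-L F1: based on the longest common subsequence (LCS) of tokens."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    # Dynamic-programming table for LCS length.
    dp = [[0] * (len(ref) + 1) for _ in range(len(cand) + 1)]
    for i, c in enumerate(cand, 1):
        for j, r in enumerate(ref, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if c == r else max(dp[i - 1][j], dp[i][j - 1])
    lcs = dp[len(cand)][len(ref)]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(cand), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)
```

Unlike ROUGE-1, ROUGE-L rewards summaries that preserve the reference's token order, which matters for log messages where sequence conveys meaning (e.g. `rouge_l_f1("a c b", "a b c")` is lower than `rouge1_f1("a c b", "a b c")`, even though both strings contain the same unigrams).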
Keywords
Log Summarisation, Multi-Label, Natural Language Processing, Text Classification, Transformers