Improvements on a Multi-Task BERT Model

Authors

Tekir, Selma

Green Open Access

No

Publicly Funded

No

Abstract

Pre-trained language models have introduced significant performance boosts in natural language processing. Fine-tuning these models on downstream tasks' supervised data further improves the results. In the fine-tuning process, combining the learning of tasks is an effective approach. This paper proposes a multi-task learning framework based on BERT. To accomplish the tasks of sentiment analysis, paraphrase detection, and semantic textual similarity, we add linear layers, a Siamese network with cosine similarity, and convolutional layers at the appropriate places in the architecture. We conducted an ablation study on the Stanford Sentiment Treebank (SST), Quora, and SemEval STS datasets to test the framework and the effectiveness of its components on each task. The results demonstrate that the proposed multi-task framework improves the performance of BERT. The best results for sentiment analysis, paraphrase detection, and semantic textual similarity are accuracies of 0.534 and 0.697, respectively, and a Pearson correlation coefficient of 0.345.
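To make the head layout concrete, the sketch below shows one plausible reading of that architecture: a shared BERT encoder with a linear head for sentiment analysis, a convolutional head for paraphrase detection, and a Siamese branch scored by cosine similarity for semantic textual similarity. It assumes PyTorch and Hugging Face's transformers library; the head sizes, kernel width, pooling choices, and five-class sentiment output are illustrative assumptions, since the abstract does not specify them.

import torch
import torch.nn as nn
from transformers import BertModel

class MultiTaskBERT(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_sentiment_classes=5):
        super().__init__()
        # Shared encoder: all three tasks fine-tune the same BERT weights.
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size

        # Sentiment analysis (SST): linear layer over the [CLS] embedding.
        self.sentiment_head = nn.Linear(hidden, num_sentiment_classes)

        # Paraphrase detection (Quora): 1-D convolution over the token
        # embeddings of the packed sentence pair (sizes are assumptions).
        self.conv = nn.Conv1d(hidden, 256, kernel_size=3, padding=1)
        self.paraphrase_head = nn.Linear(256, 1)

        # Semantic textual similarity (STS): Siamese use of the shared
        # encoder, with cosine similarity between sentence embeddings.
        self.cosine = nn.CosineSimilarity(dim=-1)

    def encode(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        return out.last_hidden_state                          # (batch, seq_len, hidden)

    def sentiment(self, input_ids, attention_mask):
        cls = self.encode(input_ids, attention_mask)[:, 0]    # [CLS] token
        return self.sentiment_head(cls)                       # class logits

    def paraphrase(self, input_ids, attention_mask):
        h = self.encode(input_ids, attention_mask).transpose(1, 2)  # (B, H, T)
        feats = torch.relu(self.conv(h)).max(dim=-1).values         # max-pool over tokens
        return self.paraphrase_head(feats)                          # binary logit

    def similarity(self, ids_a, mask_a, ids_b, mask_b):
        # Both sentences pass through the same encoder (Siamese branch).
        emb_a = self.encode(ids_a, mask_a).mean(dim=1)        # mean-pooled embedding
        emb_b = self.encode(ids_b, mask_b).mean(dim=1)
        return self.cosine(emb_a, emb_b)                      # score in [-1, 1]

In a framework like this, training typically alternates mini-batches from the SST, Quora, and STS datasets, back-propagating each task's loss through the shared encoder so the tasks regularize one another.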

Keywords

Multi-Task Learning, Sentiment Analysis, Paraphrase Detection, Semantic Textual Similarity

Start Page

1

End Page

4

Sustainable Development Goals

No Poverty