Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection
Permanent URI for this collection: https://hdl.handle.net/11147/7148
Article: Contrastive Retrieval Methodology for Turkish Metaphor Detection and Identification (Assoc Computing Machinery, 2025), Inan, Emrah

Metaphorical expressions, as a form of figurative language, are individually limited in their use. However, when both literal and non-literal meanings are considered, they are frequently used in web content. Hence, producing a balanced dataset to learn superior representations is a challenging task, and metaphor detection suffers from a limited training dataset. To alleviate this problem, we present a retrieval-based contrastive learning approach which first identifies candidate metaphors in the input text and then detects metaphorical expressions as a claim verification task in the inherently unbalanced setting of this study. Furthermore, we adapt contrastive learning to make it easier to distinguish between the literal and figurative meanings of the same expression. For the experimental setup, we extract non-literal and literal expressions along with their meanings and sample sentences from a Turkish dictionary. In the metaphor detection subtask, performance evaluation shows that sparse and dense search variations using the Turkish-e5-Large model achieve a Recall@10 (R@10) score of 0.614. Moreover, the SimCSE-TR-Contr-Sample-Meaning model achieves the highest R@10 of 0.9739 on the generated test dataset for the metaphor identification subtask. In the real-world scenario, it achieves a competitive R@10 score of 0.8684, and these results clearly demonstrate that our model can generalise to this real-world scenario.

Article: Making Hierarchically Aware Decisions on Short Findings for Automatic Summarisation (Elsevier, 2025), Inan, Emrah

An impression in a typical radiology report emphasises critical information by providing a conclusion and reasoning based on the findings.
However, the findings and impression sections of these reports generally contain brief texts, as they highlight crucial observations derived from the clinical radiograph. In this scenario, abstractive summarisation models often experience a degradation in performance when generating short impressions. To address this challenge in the summarisation task, our work proposes a method that combines well-known fine-tuned text classification and abstractive summarisation language models. Since fine-tuning a language model requires an extensive, well-defined training dataset and is a time-consuming task dependent on high GPU resources, we employ prompt engineering, which uses prompt templates to programme language models and improve their performance. Our method first predicts whether the given findings text is normal or abnormal by leveraging a fine-tuned language model. Then, we apply a radiology-specific BART model to generate the summary for abnormal findings. In the zero-shot setting, our method achieves remarkable results compared to existing approaches on a real-world dataset. In particular, our method achieves scores of 37.43 for ROUGE-1, 21.72 for ROUGE-2, and 35.52 for ROUGE-L.
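The classify-then-summarise method described in the second abstract above can be sketched as a simple two-stage control flow. This is a minimal illustration, not the paper's implementation: the keyword-based classifier, the first-sentence "summariser", the cue list, and the fixed normal-study template are all stand-ins for the fine-tuned classification model and radiology-specific BART model the abstract describes.

```python
# Minimal sketch of a classify-then-summarise pipeline for radiology findings.
# The classifier and summariser below are illustrative stand-ins only.

ABNORMAL_CUES = ("opacity", "effusion", "consolidation", "nodule")  # hypothetical cue list

def classify_findings(findings: str) -> str:
    """Stand-in for the fine-tuned normal/abnormal text classifier."""
    text = findings.lower()
    return "abnormal" if any(cue in text for cue in ABNORMAL_CUES) else "normal"

def summarise_abnormal(findings: str) -> str:
    """Stand-in for the abstractive summariser; a real system would
    call a radiology-specific BART model here."""
    first_sentence = findings.split(".")[0].strip()
    return f"Impression: {first_sentence}."

def generate_impression(findings: str) -> str:
    # Stage 1: decide whether the findings are normal or abnormal.
    if classify_findings(findings) == "normal":
        # Normal studies get a fixed template instead of a generated summary.
        return "Impression: No acute abnormality."
    # Stage 2: abstractively summarise only the abnormal findings.
    return summarise_abnormal(findings)

print(generate_impression("Small right pleural effusion. Heart size normal."))
```

The point of the split is that the generative model is only invoked on abnormal cases, where a short, informative impression is actually needed.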
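The retrieve-and-rank idea in the first abstract above (Contrastive Retrieval Methodology for Turkish Metaphor Detection and Identification) can likewise be sketched: candidate expressions are matched against dictionary senses with literal and figurative glosses, and the top-ranked sense labels the usage. Bag-of-words cosine similarity here stands in for the dense contrastive encoders (e.g. the SimCSE-style models) the abstract reports; the sense inventory and example contexts are invented for illustration.

```python
# Minimal sketch of retrieval-based metaphor identification: rank dictionary
# senses by similarity to the usage context. The embedding is a toy
# bag-of-words model standing in for a dense contrastive encoder.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

# Toy sense inventory: (expression, sense gloss, label) -- illustrative only.
SENSES = [
    ("cold", "low in temperature", "literal"),
    ("cold", "emotionally distant and unfriendly", "figurative"),
]

def identify(context: str, k: int = 1):
    """Rank senses by similarity to the context; return the top-k labels."""
    q = embed(context)
    ranked = sorted(SENSES, key=lambda s: cosine(q, embed(s[1])), reverse=True)
    return [label for _, _, label in ranked[:k]]

print(identify("her reply was distant and unfriendly"))
```

A metric such as Recall@10 then simply asks whether the correct sense appears among the top ten retrieved candidates.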
