Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection

Permanent URI for this collection: https://hdl.handle.net/11147/7148

Search Results

Now showing 1 - 2 of 2
  • Article
    Toward Reliable Annotation in Low-Resource NLP: A Mixture of Agents Framework and Multi-LLM Benchmarking
    (IEEE-Inst Electrical Electronics Engineers Inc, 2025) Onan, Aytug; Nasution, Arbi Haza; Celikten, Tugba
    This paper introduces the Mixture-of-Agents (MoA) framework, a structured approach for improving the reliability of large language model (LLM)-based text annotation in low-resource NLP contexts. MoA employs coordinated agent interactions to enhance agreement, interpretability, and robustness without manual supervision. Evaluations on Turkish classification benchmarks demonstrate that MoA achieves up to 10-point improvements in macro-F1 over single-model baselines and significantly increases inter-agent consistency. Additionally, three novel reliability metrics, Conflict Rate (CR), Ambiguity Resolution Success Rate (ARSR), and Refinement Correction Rate (RCR), are proposed to quantify annotation stability and correction dynamics. The results indicate that multi-agent coordination can substantially improve label quality, offering a scalable pathway toward trustworthy annotation in low-resource and cross-domain applications. The framework is language-agnostic and adaptable to other low-resource contexts beyond Turkish, including morphologically rich or typologically diverse languages such as Indonesian, Urdu, and Swahili. These findings highlight the scalability of MoA as a generalizable solution for multilingual and cross-domain annotation.
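    The abstract names a Conflict Rate metric without giving its formula. One plausible reading, sketched below purely as an illustration, is the fraction of items on which the coordinated agents do not all agree on a label; the paper's exact definition may differ, and the function name and data are hypothetical.

    ```python
    def conflict_rate(annotations: list[list[str]]) -> float:
        """annotations[i] holds each agent's label for item i.

        Returns the fraction of items with at least two distinct labels,
        i.e. items on which the agents conflict.
        """
        conflicted = sum(1 for labels in annotations if len(set(labels)) > 1)
        return conflicted / len(annotations)

    # Illustrative run: 3 agents labeling 4 items, 2 items contested.
    labels = [
        ["pos", "pos", "pos"],  # unanimous
        ["pos", "neg", "pos"],  # conflict
        ["neu", "neu", "neu"],  # unanimous
        ["neg", "neu", "pos"],  # conflict
    ]
    print(conflict_rate(labels))  # 0.5
    ```

    Under this reading, a lower CR after MoA coordination would indicate the increased inter-agent consistency the abstract reports.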
  • Conference Object
    Collabpersona: A Framework for Collaborative Decision Analysis in Persona Driven LLM-Based Multi-Agent Systems
    (IEEE Computer Society, 2025) Tamer, O.A.; Gumus, A.
    Large Language Model (LLM) agents have recently demonstrated impressive capabilities in single-agent and adversarial settings, but their ability to collaborate effectively with minimal communication remains uncertain. We introduce CollabPersona, a simulation framework that combines persona-grounded memory with one-shot feedback to study team-based reasoning among LLM agents. In a multi-round variant of the Guess 0.8 of the Average game, agents reason entirely through structured prompts without fine-tuning. Our results show that minimal feedback significantly improves intra-team coordination and stabilizes strategic behavior, while cognitive style remains a primary driver of competitive outcomes. These findings suggest that lightweight scaffolding can elicit emergent collaboration in LLM agents and provide a flexible platform for studying cooperative intelligence. © 2025 IEEE.
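    The Guess 0.8 of the Average game used above is a p-beauty contest with p = 0.8: each agent submits a number, and the guess closest to 0.8 times the mean wins the round. The sketch below shows only the game's scoring rule, with illustrative agent names; it is not part of the CollabPersona framework itself.

    ```python
    def play_round(guesses: dict[str, float], p: float = 0.8) -> tuple[float, str]:
        """Return the round's target (p * mean of guesses) and the
        agent whose guess is closest to that target."""
        target = p * sum(guesses.values()) / len(guesses)
        winner = min(guesses, key=lambda a: abs(guesses[a] - target))
        return target, winner

    # Illustrative round: mean = 40.67, target = 0.8 * 40.67 = 32.53,
    # so agent_c's guess of 32 is closest and wins.
    guesses = {"agent_a": 50.0, "agent_b": 40.0, "agent_c": 32.0}
    target, winner = play_round(guesses)
    print(round(target, 2), winner)  # 32.53 agent_c
    ```

    Iterating such rounds while feeding each agent the previous target is what makes convergence of strategic behavior observable across rounds.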