Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection
Permanent URI for this collection: https://hdl.handle.net/11147/7148
Search Results
Article
Vision-Language Model Approach for Few-Shot Learning of Attention Deficit Hyperactivity Disorder Using EEG Connectivity-Based Featured Images (IOP Publishing Ltd, 2025)
Catal, Mehmet Sergen; Gumus, Abdurrahman; Karabiber Cura, Ozlem; Aydin, Ocan; Zubeyir Unlu, Mehmet
Traditional medical diagnosis approaches have predominantly relied on single-modality analysis, limiting clinicians to interpreting isolated data streams such as images or time series. The integration of vision language models (VLMs) into neurophysiological analysis represents a paradigm shift toward multimodal diagnostic frameworks, enabling clinicians to interact with diagnostic models through diverse modalities including text, audio, and visual inputs. This multimodal interaction capability extends beyond conventional label-based classification, offering clinicians flexibility in diagnostic reasoning and decision-making processes. Building on this foundation, this study explores the application of VLMs to electroencephalography (EEG)-based attention deficit hyperactivity disorder (ADHD) classification, addressing a gap in neurophysiological diagnostics. The proposed framework applies VLM-based few-shot ADHD classification by converting raw EEG data into EEG connectivity-based featured images compatible with the image encoder of contrastive language-image pre-training (CLIP). The adapter-based CLIP approach (Tip-Adapter and Tip-Adapter-F) for few-shot learning improves CLIP's zero-shot classification performance, achieving 78.73% accuracy with 1-shot and 98.30% accuracy with 128-shot using the RN50x16 backbone. Experiments investigate prompt engineering effects, backbone architectures of CLIP, patient-based classification, and combinations of EEG connectivity features. Comparative analysis is performed with two datasets to evaluate the approach across different data sources. Through the adaptation of pre-trained VLMs to neurophysiological data, this technique demonstrates the potential for multimodal diagnostic frameworks that enable flexible clinician-model interactions beyond conventional label-based classification systems. The approach achieves effective ADHD classification with minimal training data while establishing foundations for applying VLMs in clinical neuroscience, where diverse modality interactions through text, visual, and audio inputs can enhance diagnostic workflows. The code is publicly available on GitHub to facilitate further research in the field: https://github.com/miralab-ai/vlm-few-shot-eeg.
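
For context, the Tip-Adapter scheme named in the abstract above augments CLIP's zero-shot logits with a training-free cache built from a handful of labelled support images. A minimal sketch of that idea on CLIP's RN50x16 backbone is given below; the class prompts, file paths, hyperparameter values, and the assumption that EEG connectivity maps have already been rendered as RGB images are illustrative, not the authors' implementation.

# Minimal Tip-Adapter-style few-shot classification on CLIP features (sketch).
# Assumes the EEG connectivity features have already been rendered as images.
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50x16", device=device)  # backbone named in the abstract

class_names = ["ADHD", "healthy control"]  # illustrative labels
prompts = clip.tokenize(
    [f"an EEG connectivity image of a {c} subject" for c in class_names]
).to(device)

def encode_image(path: str) -> torch.Tensor:
    """Encode one featured image into a normalized CLIP embedding."""
    img = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        feat = model.encode_image(img).float()
    return feat / feat.norm(dim=-1, keepdim=True)

# Build the few-shot cache: keys = support features, values = one-hot labels.
support = [("support/adhd_01.png", 0), ("support/ctrl_01.png", 1)]  # hypothetical paths
cache_keys = torch.cat([encode_image(p) for p, _ in support])                  # (N, D)
cache_vals = torch.nn.functional.one_hot(
    torch.tensor([y for _, y in support]), num_classes=len(class_names)
).float().to(device)                                                           # (N, C)

with torch.no_grad():
    text_feat = model.encode_text(prompts).float()
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

def classify(path: str, alpha: float = 1.0, beta: float = 5.5) -> str:
    """Blend zero-shot CLIP logits with training-free cache logits (Tip-Adapter)."""
    f = encode_image(path)                                   # (1, D)
    clip_logits = 100.0 * f @ text_feat.T                    # zero-shot term
    affinity = torch.exp(-beta * (1.0 - f @ cache_keys.T))   # similarity to support set
    cache_logits = affinity @ cache_vals                     # (1, C)
    logits = clip_logits + alpha * cache_logits
    return class_names[int(logits.argmax(dim=-1))]

print(classify("query/subject_42.png"))  # hypothetical query image

Here alpha and beta are the usual Tip-Adapter blending and sharpness hyperparameters; Tip-Adapter-F, also mentioned in the abstract, additionally fine-tunes the cache keys as a lightweight linear layer.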

Conference Object
Semantic Guided Autoregressive Diffusion Based Data Augmentation Using Visual Instructions (Institute of Electrical and Electronics Engineers Inc., 2025)
Yavuzcan, Ege; Kus, Omer; Gumus, Abdurrahman
Recent breakthroughs in generative image models, especially those based on diffusion techniques, have radically transformed the landscape of text-guided image synthesis by delivering exceptional fidelity and detailed semantic control. In this study, we present an iterative editing framework that harnesses the inherent strengths of these generative models to progressively refine images with precision. Our approach begins by generating diverse textual descriptions from an initial image, from which the most effective prompt is selected to drive further refinement through a fine-tuned Stable Diffusion process. This pipeline, as detailed in our flow diagram, orchestrates a series of controlled image modifications that preserve the original context while accommodating deliberate stylistic and semantic adjustments. By cycling the augmented output back into the system, our method achieves a harmonious balance between innovation and consistency, paving the way for high-quality, context-aware visual transformations. This dynamic, auto-regressive strategy underscores the transformative potential of modern image generation models for applications that require detailed, controlled creative expression. The code is available on GitHub. © 2025 Elsevier B.V., All rights reserved.

Conference Object
Iterative Semantic Refinement: A Vision Language Model-Driven Approach to Auto-Regressive Image Editing (Institute of Electrical and Electronics Engineers Inc., 2025)
Yavuzcan, Ege; Kus, Omer; Gumus, Abdurrahman
Recent advancements in Visual Language Models (VLMs) have significantly improved text-to-image generation by enabling more nuanced and semantically rich textual prompts, highlighting the transformative impact of these models on image synthesis. In this work, we leverage these robust capabilities to develop an auto-regressive editing framework that systematically refines images through careful, step-by-step modifications. Our method carefully balances subtle adjustments with meaningful semantic shifts, ensuring that each editing stage preserves the core context while introducing precise variations. By integrating improvements from controllable image editing models, we enhance the precision and stability of our edits and demonstrate the effectiveness of our approach in maintaining visual coherence. This integration results in a powerful strategy for producing diverse, high-quality outputs that align with finely tuned semantic goals. Centered on the strength of VLMs, this framework opens up a new paradigm for image synthesis, offering a blend of creative flexibility and consistent contextual fidelity that holds promise for a variety of applications requiring intricate and controlled visual transformations. © 2025 Elsevier B.V., All rights reserved.
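
The two conference abstracts above describe the same basic loop: caption the current image, pick a prompt, re-render the image with a diffusion model, and feed the result back in. A rough sketch of such a loop using off-the-shelf BLIP captioning and Stable Diffusion img2img from the diffusers library follows; the model checkpoints, the longest-caption selection heuristic, and the strength setting are assumptions rather than the published pipeline.

# Sketch of an auto-regressive caption -> edit -> feed-back loop (assumed models and heuristics).
import torch
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
from diffusers import StableDiffusionImg2ImgPipeline

device = "cuda"  # a GPU is assumed for the fp16 diffusion pipeline
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base").to(device)
editor = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to(device)

def describe(image: Image.Image, n: int = 4) -> list[str]:
    """Generate several candidate descriptions of the current image."""
    inputs = processor(images=image, return_tensors="pt").to(device)
    ids = captioner.generate(**inputs, do_sample=True, num_return_sequences=n,
                             max_new_tokens=30)
    return [processor.decode(seq, skip_special_tokens=True) for seq in ids]

def select_prompt(candidates: list[str], style: str) -> str:
    """Placeholder selection heuristic: keep the most detailed caption, append the target style."""
    return max(candidates, key=len) + f", {style}"

image = Image.open("input.png").convert("RGB").resize((512, 512))  # hypothetical input
for step in range(3):  # number of refinement cycles is arbitrary here
    prompt = select_prompt(describe(image), style="watercolor illustration")
    image = editor(prompt=prompt, image=image, strength=0.35, guidance_scale=7.5).images[0]
    image.save(f"edit_step_{step}.png")  # the output is cycled back as the next input

Keeping the img2img strength low biases each step toward preserving the original content, while a higher value permits the larger stylistic and semantic shifts the abstracts mention.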

Article
Vis-Assist: Computer Vision and Haptic Feedback-Based Wearable Assistive Device for Visually Impaired (Springer, 2025)
Citation - WoS: 2
Dede, Ibrahim; Gumus, Abdurrahman
Visual impairment affects millions of people worldwide, posing significant challenges in their daily lives and personal safety. While assistive technologies, both wearable and non-wearable, can help mitigate these challenges, wearable devices offer the advantage of hands-free operation. In this context, we present Vis-Assist, a novel wearable visual assistive device capable of detecting and classifying objects, measuring their distances, and providing real-time haptic feedback through a vibration motor array, all using an integrated low-cost computational unit without the need for external servers. Our study distinguishes itself by utilizing haptic feedback to convey object information, allowing visually impaired individuals to discern between 19 different object classes following a brief training period. Haptic feedback offers an alternative to audio that doesn't block hearing and can be used alongside it, serving as a complementary solution. The performance of the developed wearable device was evaluated through two types of experiments with four participants. The results demonstrate that users can identify the location of objects and thereby prevent collisions with obstacles. The experiments conducted demonstrate that users, on average, can locate a predefined object, such as a chair, within a 40 m² vacant space in under 94 seconds. Furthermore, users exhibit proficiency in finding objects while navigating around obstacles in the same environment, achieving this task in less than 121 seconds on average. The system developed here has high potential to support the self-navigation of visually impaired people and make their daily lives easier. To facilitate further research in this field, the complete source code for this study has been made publicly available on GitHub.
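
As a rough illustration of the detection-to-haptics mapping described above, the sketch below maps each detected object's horizontal position to a motor index and its distance to a vibration intensity; the motor count, frame width, intensity law, and detection format are assumptions, not the published Vis-Assist design.

# Sketch: mapping detections (bounding box + distance) onto a row of vibration motors.
# Motor count, intensity law, and the detection format are illustrative assumptions.
from dataclasses import dataclass

NUM_MOTORS = 8        # assumed horizontal motor array
FRAME_WIDTH = 640     # camera frame width in pixels
MAX_RANGE_M = 4.0     # distances beyond this produce no vibration

@dataclass
class Detection:
    label: str
    x_min: float
    x_max: float
    distance_m: float  # e.g. from a depth sensor or stereo estimate

def motor_duties(detections: list[Detection]) -> list[float]:
    """Return a PWM duty cycle (0..1) per motor: closer objects vibrate harder,
    and the motor index follows the object's horizontal position in the frame."""
    duties = [0.0] * NUM_MOTORS
    for det in detections:
        center_x = (det.x_min + det.x_max) / 2.0
        idx = min(int(center_x / FRAME_WIDTH * NUM_MOTORS), NUM_MOTORS - 1)
        # Linear intensity falloff with distance, clamped to [0, 1].
        intensity = max(0.0, 1.0 - det.distance_m / MAX_RANGE_M)
        duties[idx] = max(duties[idx], intensity)  # strongest nearby object wins per motor
    return duties

# Example: a chair slightly left of center at 1.2 m, a wall segment far right at 3.5 m.
frame_detections = [
    Detection("chair", 180, 300, 1.2),
    Detection("wall", 560, 640, 3.5),
]
print(motor_duties(frame_detections))

On an actual device these duty cycles would drive the PWM outputs of the vibration motor array; here they are simply printed for illustration.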
