Scopus Indexed Publications Collection

Permanent URI for this collection: https://hdl.handle.net/11147/7148

Search Results

Now showing 1 - 2 of 2
  • Conference Object
    Semantic Guided Autoregressive Diffusion Based Data Augmentation Using Visual Instructions
    (Institute of Electrical and Electronics Engineers Inc., 2025) Yavuzcan, Ege; Kus, Omer; Gumus, Abdurrahman
    Recent breakthroughs in generative image models, especially those based on diffusion techniques, have radically transformed the landscape of text-guided image synthesis by delivering exceptional fidelity and detailed semantic control. In this study, we present an iterative editing framework that harnesses the inherent strengths of these generative models to progressively refine images with precision. Our approach begins by generating diverse textual descriptions from an initial image, from which the most effective prompt is selected to drive further refinement through a fine-tuned Stable Diffusion process. This pipeline, as detailed in our flow diagram, orchestrates a series of controlled image modifications that preserve the original context while accommodating deliberate stylistic and semantic adjustments. By cycling the augmented output back into the system, our method achieves a harmonious balance between innovation and consistency, paving the way for high-quality, context-aware visual transformations. This dynamic, auto-regressive strategy underscores the transformative potential of modern image generation models for applications that require detailed, controlled creative expression. The code is available on GitHub. © 2025 Elsevier B.V. All rights reserved.
  • Conference Object
    Iterative Semantic Refinement: A Vision Language Model-Driven Approach to Auto-Regressive Image Editing
    (Institute of Electrical and Electronics Engineers Inc., 2025) Yavuzcan, Ege; Kus, Omer; Gumus, Abdurrahman
    Recent advancements in Visual Language Models (VLMs) have significantly improved text-to-image generation by enabling more nuanced and semantically rich textual prompts, highlighting the transformative impact of these models on image synthesis. In this work, we leverage these robust capabilities to develop an auto-regressive editing framework that systematically refines images through careful, step-by-step modifications. Our method carefully balances subtle adjustments with meaningful semantic shifts, ensuring that each editing stage preserves the core context while introducing precise variations. By integrating improvements from controllable image editing models, we enhance the precision and stability of our edits and demonstrate the effectiveness of our approach in maintaining visual coherence. This integration results in a powerful strategy for producing diverse, high-quality outputs that align with finely tuned semantic goals. Centered on the strength of VLMs, this framework opens up a new paradigm for image synthesis, offering a blend of creative flexibility and consistent contextual fidelity that holds promise for a variety of applications requiring intricate and controlled visual transformations. © 2025 Elsevier B.V. All rights reserved.
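
Both abstracts describe the same auto-regressive loop: caption the current image with a VLM, select the most effective prompt, apply a controlled diffusion edit, and feed the result back. The following is a minimal sketch of that loop only; the `caption_fn`, `score_fn`, and `edit_fn` callables are illustrative stand-ins (a VLM captioner, a prompt scorer, and a fine-tuned Stable Diffusion editor), not the authors' actual code.

```python
from typing import Callable, List

# Illustrative stand-in: in the real pipeline this would be a PIL image
# or a tensor, consumed by the VLM and the diffusion editor.
Image = str

def autoregressive_edit(
    image: Image,
    caption_fn: Callable[[Image], List[str]],  # VLM: image -> candidate prompts
    score_fn: Callable[[Image, str], float],   # ranks prompts for effectiveness
    edit_fn: Callable[[Image, str], Image],    # diffusion editor: (image, prompt) -> edited image
    steps: int = 3,
) -> List[Image]:
    """Iteratively refine an image, cycling each edited output back as input."""
    history = [image]
    current = image
    for _ in range(steps):
        # Generate diverse textual descriptions of the current image,
        # then pick the most effective one to drive the next refinement.
        prompts = caption_fn(current)
        best_prompt = max(prompts, key=lambda p: score_fn(current, p))
        # Controlled modification that preserves context while applying
        # the prompted stylistic/semantic adjustment.
        current = edit_fn(current, best_prompt)
        history.append(current)
    return history
```

The returned `history` keeps every intermediate image, which matches the papers' emphasis on step-by-step modifications whose cumulative effect stays coherent with the original context.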