Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection
Permanent URI for this collection: https://hdl.handle.net/11147/7148
Search Results (3 results)
Conference Object
Semantic Guided Autoregressive Diffusion Based Data Augmentation Using Visual Instructions (Institute of Electrical and Electronics Engineers Inc., 2025)
Yavuzcan, Ege; Kus, Omer; Gumus, Abdurrahman
Recent breakthroughs in generative image models, especially those based on diffusion techniques, have radically transformed the landscape of text-guided image synthesis by delivering exceptional fidelity and detailed semantic control. In this study, we present an iterative editing framework that harnesses the inherent strengths of these generative models to progressively refine images with precision. Our approach begins by generating diverse textual descriptions from an initial image, from which the most effective prompt is selected to drive further refinement through a fine-tuned Stable Diffusion process. This pipeline, as detailed in our flow diagram, orchestrates a series of controlled image modifications that preserve the original context while accommodating deliberate stylistic and semantic adjustments. By cycling the augmented output back into the system, our method achieves a harmonious balance between innovation and consistency, paving the way for high-quality, context-aware visual transformations. This dynamic, auto-regressive strategy underscores the transformative potential of modern image generation models for applications that require detailed, controlled creative expression. The code is available on GitHub. © 2025 Elsevier B.V., All rights reserved.

Conference Object
Iterative Semantic Refinement: A Vision Language Model-Driven Approach to Auto-Regressive Image Editing (Institute of Electrical and Electronics Engineers Inc., 2025)
Yavuzcan, Ege; Kus, Omer; Gumus, Abdurrahman
Recent advancements in Visual Language Models (VLMs) have significantly improved text-to-image generation by enabling more nuanced and semantically rich textual prompts, highlighting the transformative impact of these models on image synthesis.
In this work, we leverage these robust capabilities to develop an auto-regressive editing framework that systematically refines images through careful, step-by-step modifications. Our method balances subtle adjustments with meaningful semantic shifts, ensuring that each editing stage preserves the core context while introducing precise variations. By integrating improvements from controllable image editing models, we enhance the precision and stability of our edits and demonstrate the effectiveness of our approach in maintaining visual coherence. This integration results in a powerful strategy for producing diverse, high-quality outputs that align with finely tuned semantic goals. Centered on the strength of VLMs, this framework opens up a new paradigm for image synthesis, offering a blend of creative flexibility and consistent contextual fidelity that holds promise for a variety of applications requiring intricate and controlled visual transformations. © 2025 Elsevier B.V., All rights reserved.

Article (Citation - WoS: 43; Citation - Scopus: 47)
Semantic Segmentation of Outdoor Panoramic Images (Springer, 2021)
Orhan, Semih; Baştanlar, Yalın
Omnidirectional cameras are capable of providing a 360° field of view in a single shot. This comprehensive view makes them preferable for many computer vision applications. An omnidirectional view is generally represented as a panoramic image with equirectangular projection, which suffers from distortions. Thus, standard camera approaches should be mathematically modified to be used effectively with panoramic images. In this work, we built a semantic segmentation CNN model that handles distortions in panoramic images using equirectangular convolutions. The proposed model, which we call UNet-equiconv, outperforms an equivalent CNN model with standard convolutions. To the best of our knowledge, ours is the first work on the semantic segmentation of real outdoor panoramic images.
Experimental results reveal that using a distortion-aware CNN with equirectangular convolution increases semantic segmentation performance (a 4% increase in mIoU). We also released a pixel-level annotated outdoor panoramic image dataset, which can be used for various computer vision applications such as autonomous driving and visual localization. The source code and the dataset are available on the project page (https://github.com/semihorhan/semseg-outdoor-pano). © 2021, The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature.
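The equirectangular convolutions described in the abstract above adapt a kernel's sampling locations to the latitude-dependent horizontal stretching of the equirectangular projection. A minimal sketch of that core idea follows; the actual EquiConv layer in the paper's repository differs in detail, and the helper names here (`equirect_row_scale`, `sampling_offsets`) are hypothetical, chosen only for illustration:

```python
import numpy as np

def equirect_row_scale(height):
    """Per-row horizontal scale factors for an equirectangular image.

    Near the poles a fixed pixel width covers a shrinking arc on the
    sphere, so a distortion-aware convolution widens its horizontal
    sampling spacing by roughly 1/cos(latitude) per image row.
    """
    # Latitude of each row center, from near +pi/2 (top) to -pi/2 (bottom).
    lat = (0.5 - (np.arange(height) + 0.5) / height) * np.pi
    # Clamp cos(lat) away from zero so the scale stays finite at the poles.
    return 1.0 / np.maximum(np.cos(lat), 1e-3)

def sampling_offsets(height, row, k=3):
    """Horizontal sampling offsets (in pixels) of a k-wide kernel at `row`."""
    scale = equirect_row_scale(height)[row]
    base = np.arange(k) - k // 2   # e.g. [-1, 0, 1] for k=3
    return base * scale            # spacing stretches toward the poles
```

At the equator the offsets reduce to the standard [-1, 0, 1] stencil, while near the top and bottom rows they widen considerably, mirroring how content is stretched horizontally in the panorama.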
