WoS İndeksli Yayınlar Koleksiyonu / WoS Indexed Publications Collection
Permanent URI for this collection: https://hdl.handle.net/11147/7150
Search Results (4 results)
Article
GGNN: Group-Guided Nearest Neighbors for Efficient Image Matching (Springer, 2025)
Cine, Ersin; Bastanlar, Yalin; Ozuysal, Mustafa
The widely adopted image matching approach remains dependent on exhaustive matching of local features across images. Existing methods aiming to improve efficiency either approximate nearest neighbor (NN) search, compromising accuracy, or apply filtering only after establishing tentative matches, which restricts potential efficiency gains. We challenge the assumption that exhaustive NN search is necessary by proposing a more efficient hierarchical approach that maintains matching accuracy without relying on full-scale NN search. Our key insight is that efficiently identifying sufficiently similar, geometrically meaningful feature matches (rather than the most similar but geometrically random ones) can improve or maintain performance at a lower computational cost. We propose a novel method, Group-Guided Nearest Neighbors (GGNN), which matches groups of features first and then matches individual features only within these matched groups. This hierarchical pipeline reduces the computational complexity of feature matching from Θ(n²) to Θ(n√n), significantly improving efficiency. Experimental results on homography estimation demonstrate that GGNN outperforms standard NN search while achieving performance comparable to state-of-the-art methods. Additionally, we formulate GGNN as a general framework in which conventional NN search is the special case with a single global feature group. This formulation provides a continuum of feature matching methods with varying computational costs, enabling automatic selection based on a given time budget.

Article (Citation - WoS: 4; Citation - Scopus: 4)
OrganoLabeler: A Quick and Accurate Annotation Tool for Organoid Images (Amer Chemical Soc, 2024)
Kahveci, Burak; Polatli, Elifsu; Bastanlar, Yalin; Guven, Sinan
Organoids are self-assembled 3D cellular structures that resemble organs structurally and functionally, providing in vitro platforms for molecular and therapeutic studies. Generating organoids from human cells often requires long and costly procedures with arguably low efficiency. Prediction and selection of cellular aggregates that result in healthy and functional organoids can be achieved with artificial-intelligence-based tools. Transforming images of 3D cellular constructs into digitally processable data sets for training deep learning models requires labeling of morphological boundaries, which is often performed manually. Here, we report an application named OrganoLabeler, which can create large image-based data sets in a consistent, reliable, fast, and user-friendly manner. OrganoLabeler creates segmented versions of images with combinations of contrast adjustment, K-means clustering, CLAHE, binary, and Otsu thresholding methods. We created embryoid body and brain organoid data sets, whose segmented images were manually created by human researchers and compared with OrganoLabeler's output. Validation was performed by training U-Net models, deep learning models specialized in image segmentation. U-Net models trained with images segmented by OrganoLabeler achieved similar or better segmentation accuracy than those trained with manually labeled reference images. OrganoLabeler can replace manual labeling, providing faster and more accurate results for organoid research free of charge.

Conference Object (Citation - WoS: 1)
Konteyner Görüntülerini Kullanarak Hasar Tespiti ve Sınıflandırması [Damage Detection and Classification Using Container Images] (IEEE, 2020)
Imamoglu, Zeynep Ekici; Tuglular, Tugkan; Bastanlar, Yalin
In the logistics sector, digital transformation is of great importance for competitiveness. Currently, container warehouse entry/exit operations, including container damage detection, are carried out manually by logistics personnel, and uploading the findings to the IT system takes several minutes. The aim of our work is to automate the detection of damaged containers, eliminating mistakes made by personnel and accelerating the process. We propose a convolutional neural network (CNN) that takes container images and classifies them as damaged or undamaged. We model the problem as binary classification and employ several CNN models. Our results show that there is no single best method for this classification. We also describe how the dataset was created and how the parameters used in the layered structures affect the models employed in this study.

Conference Object
Parça Tabanlı Eğitimin Evrişimli Yapay Sinir Ağları ile Nesne Konumlandırma Üzerindeki Etkisi [The Effect of Patch-Based Training on Object Localization with Convolutional Neural Networks] (IEEE, 2017)
Orhan, Semih; Bastanlar, Yalin
In recent years, Convolutional Neural Networks (CNNs) have shown great performance not only in image classification and recognition but also in several other computer vision tasks, and many models with different numbers of layers and depths have been proposed. In this work, deep neural networks are used to localize leopards in images. Two methods are compared: training the network on entire images, and training it on image patches cropped from the full-size images. The patch-trained model showed better performance than the model trained on full-size images.
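The GGNN entry above rests on one idea: match groups of features first, then run NN search only inside matched group pairs, cutting comparisons from Θ(n²) toward Θ(n√n). The following is a minimal toy sketch of that hierarchical structure only; the grouping rule (sorting by one coordinate), the distance function, and all names here are illustrative assumptions, not the authors' implementation:

```python
import math


def ggnn_match(desc_a, desc_b, n_groups):
    """Toy hierarchical matching: split descriptors into groups, match
    groups by their mean descriptor, then run NN search only inside the
    matched group pair. Returns (matches, number of distance computations)."""

    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v))

    def split(descs):
        # Toy grouping: sort indices by first coordinate, slice into chunks.
        order = sorted(range(len(descs)), key=lambda i: descs[i][0])
        size = max(1, math.ceil(len(order) / n_groups))
        return [order[i:i + size] for i in range(0, len(order), size)]

    def centroid(descs, idxs):
        d = len(descs[0])
        return [sum(descs[i][k] for i in idxs) / len(idxs) for k in range(d)]

    groups_a, groups_b = split(desc_a), split(desc_b)
    cents_b = [centroid(desc_b, g) for g in groups_b]
    matches, comparisons = [], 0
    for ga in groups_a:
        ca = centroid(desc_a, ga)
        # Stage 1: match the whole group against group centroids.
        best = min(range(len(cents_b)), key=lambda j: dist(ca, cents_b[j]))
        comparisons += len(cents_b)
        gb = groups_b[best]
        # Stage 2: match individual features only within the matched group.
        for i in ga:
            j = min(gb, key=lambda j: dist(desc_a[i], desc_b[j]))
            comparisons += len(gb)
            matches.append((i, j))
    return matches, comparisons
```

With n = 100 features split into √n = 10 groups, this performs about 1,100 distance computations instead of the 10,000 an exhaustive NN search would need, illustrating the n² versus n√n gap the abstract describes.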
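The OrganoLabeler entry lists Otsu thresholding among its segmentation steps. As an illustration of that one step only (a self-contained sketch, not OrganoLabeler's code), Otsu's method picks the gray level that maximizes the between-class variance of the resulting background/foreground split:

```python
def otsu_threshold(pixels):
    """Otsu's method over 8-bit gray values: return the threshold t that
    maximizes between-class variance, where background = pixels <= t."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0.0
    for t in range(256):
        w_bg += hist[t]                 # background pixel count
        if w_bg == 0:
            continue
        w_fg = total - w_bg             # foreground pixel count
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a bimodal image (e.g. dim organoid interior versus bright background) the returned threshold falls at the top of the darker mode, which is why the method is a common building block in segmentation pipelines like the one the abstract describes.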
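The last entry compares training on entire images against training on patches cropped from the full-size images. A minimal sketch of just the cropping step (patch size, stride, and the dense sliding-window scheme are illustrative assumptions; the paper's actual preprocessing is not specified here):

```python
def extract_patches(image, patch_size, stride):
    """Crop overlapping square patches from a 2D image given as a list of rows."""
    h, w = len(image), len(image[0])
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append([row[x:x + patch_size] for row in image[y:y + patch_size]])
    return patches
```

For example, an 8x8 image cropped into 4x4 patches with stride 2 yields a 3x3 grid of 9 overlapping patches, multiplying the number of training samples per labeled image, which is one reason patch-based training can help a localization model.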
