Computer Engineering / Bilgisayar Mühendisliği
Permanent URI for this collection: https://hdl.handle.net/11147/10
Search Results (3 results)
Conference Object (IEEE, 2022) | Citation - WoS: 7 | Citation - Scopus: 6
Semantic Pose Verification for Outdoor Visual Localization With Self-Supervised Contrastive Learning
Guerrero, Jose J.; Orhan, Semih; Baştanlar, Yalın
Any city-scale visual localization system has to overcome long-term appearance changes, such as varying illumination conditions or seasonal changes between query and database images. Since semantic content is more robust to such changes, we exploit semantic information to improve visual localization. In our scenario, the database consists of gnomonic views generated from panoramic images (e.g. Google Street View), and query images are collected with a standard field-of-view camera at a different time. To improve localization, we check the semantic similarity between query and database images, which is not trivial since the positions and viewpoints of the cameras do not exactly match. To learn similarity, we propose training a CNN in a self-supervised fashion with contrastive learning on a dataset of semantically segmented images. Our experiments show that this semantic similarity estimation approach works better than measuring similarity at the pixel level. Finally, we used the semantic similarity scores to verify the retrievals obtained by a state-of-the-art visual localization method and observed that contrastive learning-based pose verification increases the top-1 recall to 0.90, which corresponds to a 2% improvement.

Conference Object (IEEE, 2021) | Citation - WoS: 5 | Citation - Scopus: 10
Efficient Search in a Panoramic Image Database for Long-Term Visual Localization
Orhan, Semih; Baştanlar, Yalın
In this work, we focus on a localization technique based on image retrieval. In this technique, database images are stored with GPS coordinates, and the geographic location of the retrieved database image serves as an approximate position of the query image. In our scenario, the database consists of panoramic images (e.g. Google Street View), and query images are collected with a standard field-of-view camera at a different time. While searching for the match of a perspective query image in a panoramic image database, unlike previous studies, we do not generate a number of perspective images from each panoramic image. Instead, taking advantage of CNNs, we slide a search window over the last convolutional layer of the panoramic image and compute the similarity with the descriptor extracted from the query image. In this way, more locations are visited in less time. We conducted experiments with state-of-the-art descriptors, and the results reveal that the proposed sliding-window approach reaches higher accuracy than generating 4 or 8 perspective images.

Conference Object (IEEE, 2017)
Parça Tabanlı Eğitimin Evrişimli Yapay Sinir Ağları ile Nesne Konumlandırma Üzerindeki Etkisi (The Effect of Patch-Based Training on Object Localization with Convolutional Neural Networks)
Orhan, Semih; Bastanlar, Yalin
In recent years, Convolutional Neural Networks (CNNs) have shown great performance not only in image classification and recognition but also in several other computer vision tasks. Many models with different numbers of layers and depths have been proposed. In this work, the locations of leopards are identified with deep neural networks. To accomplish this task, two different methods are applied: the first trains a network on entire images, while the second trains on image patches cropped from the full-size images. The patch-trained model showed better performance than the model trained on full-size images.
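The pose-verification step in the first result, re-ranking retrieved database images by the semantic similarity of their embeddings to the query, can be sketched as below. This is a minimal illustration, not the authors' code: the embeddings stand in for outputs of the contrastively trained CNN, and `verify_retrievals` is a hypothetical helper name.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def verify_retrievals(query_emb, db_embs, retrieved_ids, top_k=5):
    """Re-rank the top-k retrievals of a visual localization method by
    the semantic similarity score between the query embedding and each
    candidate's embedding (hypothetical stand-ins for the CNN outputs).
    Returns (id, score) pairs, most similar first."""
    scored = [(i, cosine(query_emb, db_embs[i])) for i in retrieved_ids[:top_k]]
    scored.sort(key=lambda t: t[1], reverse=True)
    return scored
```

In this sketch the similarity scores only reorder an existing shortlist; the initial retrieval itself would still come from a standard image-retrieval pipeline.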
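The sliding-window search of the second result can be illustrated with plain arrays: a window slides across the width of a panoramic convolutional feature map (wrapping around horizontally, since a panorama has no left or right edge) and each position is scored against the query descriptor. The pooling choice and function name here are assumptions for the sketch, not the paper's exact procedure.

```python
import numpy as np

def l2norm(v):
    return v / (np.linalg.norm(v) + 1e-12)

def sliding_window_search(pano_feat, query_desc, win_w):
    """Score every horizontal window position of a panoramic conv
    feature map (H x W x C) against a query descriptor.
    Columns are indexed modulo W to model the panorama's wrap-around.
    Returns the best column index and the full score array."""
    H, W, C = pano_feat.shape
    q = l2norm(query_desc)
    scores = np.empty(W)
    for x in range(W):
        cols = [(x + i) % W for i in range(win_w)]  # wrap-around window
        window = pano_feat[:, cols, :]              # H x win_w x C slice
        desc = l2norm(window.mean(axis=(0, 1)))     # average-pooled descriptor
        scores[x] = q @ desc                        # cosine similarity
    return int(np.argmax(scores)), scores
```

Because only cheap pooling and dot products happen per position, every window location of the panorama is visited without rendering separate perspective views.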
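The patch-based training in the third result starts from cropping fixed-size patches out of the full-size images; a minimal grid-cropping sketch (names and grid strategy are assumptions, not the paper's exact setup) could look like:

```python
import numpy as np

def extract_patches(image, patch_size, stride):
    """Crop square patches from a full image on a regular grid, e.g. to
    build a patch-level training set instead of training the CNN on
    whole images."""
    H, W = image.shape[:2]
    patches = []
    for y in range(0, H - patch_size + 1, stride):
        for x in range(0, W - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return patches
```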
