Computer Engineering / Bilgisayar Mühendisliği
Permanent URI for this collection: https://hdl.handle.net/11147/10
Now showing 1 - 10 of 29
Article (Citation - Scopus: 3)
Development of Chrono-Spectral Gold Nanoparticle Growth Based Plasmonic Biosensor Platform (Elsevier, 2024)
Sözmen, Alper Baran; Elveren, Beste; Erdoğan, Duygu; Mezgil, Bahadır; Baştanlar, Yalın; Yıldız, Ümit Hakan; Arslan Yıldız, Ahu
Plasmonic sensor platforms are designed for rapid, label-free, real-time detection, and they excel as next-generation biosensors. However, current methods such as Surface Plasmon Resonance require expertise and well-equipped laboratory facilities. Simpler methods such as Localized Surface Plasmon Resonance (LSPR) overcome those limitations, though they lack sensitivity. Hence, sensitivity enhancement plays a crucial role in the future of plasmonic sensor platforms. Herein, a refractive index (RI) sensitivity enhancement methodology is reported that utilizes the growth of gold nanoparticles (GNPs) on a solid support, backed by artificial neural network (ANN) analysis. Sensor platform fabrication was initiated with GNP immobilization onto the solid support; the immobilized GNPs were then used as seeds for chrono-spectral growth, carried out using NH2OH at varied incubation times. The platform's response to RI change was investigated with varied concentrations of sucrose and ethanol. For validation, detection of the model microorganism E. coli BL21 was carried out, and results showed that detection was possible at 10² CFU/ml. The data acquired by spectrophotometric measurements were analyzed by ANN, and bacteria classification with percentage error rates near 0% was achieved. The proposed LSPR-based, label-free sensor application demonstrates that the developed methodology offers useful sensitivity-enhancement potential for similar sensor platforms.
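The abstract above pairs spectrophotometric measurements with an ANN classifier. As a minimal illustrative sketch (not the authors' code), the idea of classifying LSPR extinction spectra can be shown with synthetic Gaussian bands and a simple linear classifier standing in for the ANN; the peak positions, noise level, and class labels here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 700, 151)          # nm, synthetic spectral axis

def make_spectra(peak_nm, n):
    """Synthetic LSPR extinction band: a Gaussian whose peak red-shifts
    as the local refractive index changes (e.g. analyte binding)."""
    band = np.exp(-((wavelengths - peak_nm) ** 2) / (2 * 20.0 ** 2))
    return band + 0.02 * rng.standard_normal((n, wavelengths.size))

# Hypothetical classes: bare platform (~530 nm) vs. analyte-bound (~545 nm)
X = np.vstack([make_spectra(530, 40), make_spectra(545, 40)])
y = np.array([0] * 40 + [1] * 40)

# Logistic regression trained by gradient descent stands in for the ANN
w, b = np.zeros(wavelengths.size), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))        # sigmoid predictions
    grad = p - y                                  # cross-entropy gradient
    w -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = (pred == y).mean()
print(f"training accuracy on synthetic spectra: {accuracy:.2f}")
```

A red-shifted peak moves intensity to longer wavelengths, so even a linear decision boundary over the raw spectrum separates the two classes; the paper's ANN plays the same role on real measurements.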
Conference Object (Citation - Scopus: 1)
Monocular Vision-Based Prediction of Cut-In Manoeuvres With LSTM Networks (Springer, 2023)
Nalçakan, Yağız; Baştanlar, Yalın
Advanced driver assistance and automated driving systems should be capable of predicting and avoiding dangerous situations. In this paper, we first discuss the importance of predicting dangerous lane changes and describe it as a machine learning problem. After summarizing previous work, we propose a method to predict potentially dangerous lane changes (cut-ins) of the vehicles in front. We follow a computer vision-based approach that employs only a single in-vehicle RGB camera, and we classify the target vehicle's maneuver based on the recent video frames. Our algorithm consists of a CNN-based vehicle detection and tracking step and an LSTM-based maneuver classification step. It is computationally efficient compared to other vision-based methods since it exploits a small number of features for the classification step rather than feeding CNNs with RGB frames. We evaluated our approach on a publicly available driving dataset and a lane change detection dataset. We obtained 0.9585 accuracy with the side-aware two-class (cut-in vs. lane-pass) classification model. Experiment results also reveal that our approach outperforms state-of-the-art approaches when used for lane change detection.

Book Part (Citation - Scopus: 2)
Dementia Detection With Deep Networks Using Multi-Modal Image Data (CRC Press, 2023)
Yiğit, Altuğ; Işık, Zerrin; Baştanlar, Yalın
Neurodegenerative diseases give rise to irreversible neural damage in the brain. By the time a disease is diagnosed, it may have progressed considerably. Although many types of neurodegenerative disease have no complete cure, detecting the disease in its early stages allows treatments to relieve some symptoms or slow disease progression.
Many invasive and non-invasive methods are employed for the diagnosis of dementia. Computer-assisted diagnostic systems make the diagnosis based on volumetric features (structural or functional) or on two-dimensional brain views obtained from a single imaging modality. This chapter first gives a broad review of multi-modal imaging approaches proposed for dementia diagnosis. It then presents deep neural networks that extract structural and functional features from multi-modal imaging data and are employed to diagnose Alzheimer's disease and mild cognitive impairment. While MRI scans are safer than most other scan types and provide structural information about the human body, PET scans provide information about functional activity in the brain. Thus, the experimental setup was designed to use both MRI and FDG-PET scans. The performance of the multi-modal models was compared with single-modal solutions; the multi-modal solution showed superiority over the single-modal ones thanks to its ability to focus on assorted features. © 2023 selection and editorial matter, Jyotismita Chaki; individual chapters, the contributors.

Article (Citation - Scopus: 3)
Cut-In Maneuver Detection With Self-Supervised Contrastive Video Representation Learning (Springer, 2023)
Nalçakan, Yağız; Baştanlar, Yalın
The detection of the maneuvers of surrounding vehicles is important for autonomous vehicles, which must act accordingly to avoid possible accidents. This study proposes a framework based on contrastive representation learning to detect potentially dangerous cut-in maneuvers that can happen in front of the ego vehicle. First, the encoder network is trained in a self-supervised fashion with a contrastive loss, where two augmented versions of the same video clip stay close to each other in the embedding space while augmentations from different videos stay far apart. Since no maneuver labeling is required in this step, a relatively large dataset can be used.
After this self-supervised training, the encoder is fine-tuned with our cut-in/lane-pass labeled datasets. Instead of using the original video frames, we simplified the scene by highlighting the surrounding vehicles and the ego lane. We investigated the use of several classification heads, augmentation types, and scene simplification alternatives. The most successful model outperforms the best fully supervised model by about 2%, with an accuracy of 92.52%.

Research Project
Vehicle Detection and Classification in Traffic Scenes With Omnidirectional and PTZ Cameras [Trafik sahnelerinde tümyönlü ve PTZ kameralar ile araç tespiti ve sınıflandırması] (2016)
Baştanlar, Yalın
In our study, a hybrid camera system containing one omnidirectional and one pan-tilt-zoom (PTZ) camera is proposed for detecting and classifying vehicles in traffic scenes. In the proposed system, the omnidirectional camera classifies vehicles using shape-based features and, when it detects objects of a designated target class, directs the PTZ camera toward them. In this way, the PTZ camera can acquire high-resolution images of the vehicles of interest while the omnidirectional camera continues general detection, tracking, and classification. In addition, to increase classification performance, a second classification can be performed using gradient-based features extracted from the PTZ camera images. The classification performance of these approaches was measured through experiments, and the PTZ tracking module was examined in realistic scenarios. The object types studied are motorcycles, cars, minibuses, and pedestrians.

Research Project
Detection of Certain Animal Species in Camera-Trap Photographs [Fotokapan fotoğraflarında bazı hayvan türlerinin tespiti] (2018)
Baştanlar, Yalın
Camera traps are motion-triggered cameras set up to observe wild animals in nature. With advancing technology, the use of camera traps, and hence the number of images collected in the field, has increased markedly. The labor required to go through all camera-trap images and determine whether they contain an animal, and which animal it is, grows proportionally. The aim of our study is to make these determinations automatically, leaving nature researchers a much smaller number of photographs to check by eye. To this end, effective techniques were first investigated for eliminating overexposed, dark, and blurry photographs. After these unusable photographs are filtered out, the first goal is to detect photographs that contain an animal. For this purpose, we propose a system that weeds out images without animals by combining object detection via background subtraction (since camera traps collect images of a scene with an unchanging background at varying time intervals) with object detection using convolutional neural networks (CNNs). A further goal is the detection of a specific animal species in the photographs. To this end, we researched training convolutional neural networks to find a particular species and proposed an original part-based training method. We also investigated what kind of interface should deliver the developed filtering and animal-detection methods to users, and a software prototype was developed.

Conference Object (Citation - WoS: 7, Scopus: 6)
Semantic Pose Verification for Outdoor Visual Localization With Self-Supervised Contrastive Learning (IEEE, 2022)
Guerrero, Jose J.; Orhan, Semih; Baştanlar, Yalın
Any city-scale visual localization system has to overcome long-term appearance changes, such as varying illumination conditions or seasonal changes between query and database images. Since semantic content is more robust to such changes, we exploit semantic information to improve visual localization. In our scenario, the database consists of gnomonic views generated from panoramic images (e.g. Google Street View) and query images are collected with a standard field-of-view camera at a different time. To improve localization, we check the semantic similarity between query and database images, which is not trivial since the positions and viewpoints of the cameras do not exactly match. To learn similarity, we propose training a CNN in a self-supervised fashion with contrastive learning on a dataset of semantically segmented images. Our experiments show that this semantic similarity estimation approach works better than measuring similarity at the pixel level. Finally, we used the semantic similarity scores to verify the retrievals obtained by a state-of-the-art visual localization method and observed that contrastive-learning-based pose verification increases the top-1 recall to 0.90, which corresponds to a 2% improvement.

Conference Object (Citation - WoS: 5, Scopus: 10)
Efficient Search in a Panoramic Image Database for Long-Term Visual Localization (IEEE, 2021)
Orhan, Semih; Baştanlar, Yalın
In this work, we focus on a localization technique based on image retrieval, in which database images are stored with GPS coordinates and the geographic location of the retrieved database image serves as an approximate position of the query image. In our scenario, the database consists of panoramic images (e.g. Google Street View) and query images are collected with a standard field-of-view camera at a different time. While searching for the match of a perspective query image in a panoramic image database, unlike previous studies, we do not generate a number of perspective images from each panoramic image. Instead, taking advantage of CNNs, we slide a search window over the last convolutional layer computed for the panoramic image and compute the similarity with the descriptor extracted from the query image. In this way, more locations are visited in less time.
We conducted experiments with state-of-the-art descriptors, and the results reveal that the proposed sliding-window approach reaches higher accuracy than generating 4 or 8 perspective images.

Article (Citation - WoS: 8, Scopus: 9)
Dementia diagnosis by ensemble deep neural networks using FDG-PET scans (Springer, 2022)
Yiğit, Altuğ; Baştanlar, Yalın; Işık, Zerrin
Dementia is a type of brain disease that affects mental abilities. Various studies utilize PET features or two-dimensional brain views to diagnose dementia. In this study, we propose an ensemble approach that employs volumetric and axial-perspective features for the diagnosis of Alzheimer's disease and of patients with mild cognitive impairment. We employed deep learning models and constructed two disparate networks: the first evaluates volumetric features, and the second assesses grid-based brain scan features. The decisions of these networks were combined by an adaptive majority voting algorithm to create an ensemble learner. In the evaluations, we compared the ensemble network with the single networks as well as with feature fusion networks to identify possible improvements; as a result, the ensemble method turned out to be promising for making a diagnostic decision. The proposed ensemble network achieved an average accuracy of 91.83% for the diagnosis of Alzheimer's disease; to the best of our knowledge, this is the highest diagnostic performance in the literature.

Article (Citation - WoS: 7, Scopus: 8)
Long-Term Image-Based Vehicle Localization Improved With Learnt Semantic Descriptors (Elsevier, 2022)
Çınaroğlu, İbrahim; Baştanlar, Yalın
Vision-based solutions for the localization of vehicles have become popular recently.
In this study, we employ an image-retrieval-based visual localization approach, in which database images are stored with GPS coordinates and the location of the retrieved database image serves as the position estimate of the query image in a city-scale driving scenario. Most existing studies on this approach use only descriptors extracted from RGB images and do not exploit semantic content. We show that localization can be improved via descriptors extracted from semantically segmented images, especially when the environment is subject to severe illumination, seasonal, or other long-term changes. We worked on two separate visual localization datasets, one of which (Malaga Streetview Challenge) was generated by us and made publicly available. Following the extraction of semantic labels in the images, we trained a CNN model for localization in a weakly supervised fashion with a triplet ranking loss. The optimized semantic descriptor can be used on its own for localization, or preferably combined with a state-of-the-art RGB-image-based descriptor in a hybrid fashion to improve accuracy. Our experiments reveal that the proposed hybrid method increases the localization performance of the standard (RGB-image-based) approach by up to 7.7% in Top-1 Recall.
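The triplet ranking loss mentioned in the abstract above has a compact standard formulation. The sketch below is a generic illustration, not the paper's training code; the descriptor values and the margin are made up for this example:

```python
import numpy as np

def triplet_ranking_loss(anchor, positive, negative, margin=0.5):
    """Hinge-style triplet loss: the positive (same place as the anchor)
    should be closer to the anchor than the negative (different place)
    by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# Toy descriptors (values invented for illustration)
anchor   = np.array([1.0, 0.0, 0.0])
positive = np.array([0.9, 0.1, 0.0])   # same location, slight appearance change
negative = np.array([0.0, 1.0, 0.0])   # different location

good = triplet_ranking_loss(anchor, positive, negative)   # ranking satisfied
bad  = triplet_ranking_loss(anchor, negative, positive)   # ranking violated
print(good, bad)
```

During training, only triplets that violate the ranking produce a positive loss, so the network's weights are pushed to separate descriptors of different places; at retrieval time the learned descriptors can then be compared with plain Euclidean distance.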
