Baştanlar, Yalın
Profile URL
Name Variants
Baştanlar, Y.
Bastanlar, Y.
Baştanlar, Y
Bastanlar, Yaln
Bastanlar, Yalin
Bastanlar, Y
Job Title
Email Address
yalinbastanlar@iyte.edu.tr
Main Affiliation
03.04. Department of Computer Engineering
Status
Current Staff
ORCID ID
Scopus Author ID
Turkish CoHE Profile ID
Google Scholar ID
WoS Researcher ID
Sustainable Development Goals

| SDG | Research Products |
|---|---|
| 1. No Poverty | 0 |
| 2. Zero Hunger | 1 |
| 3. Good Health and Well-Being | 0 |
| 4. Quality Education | 4 |
| 5. Gender Equality | 0 |
| 6. Clean Water and Sanitation | 0 |
| 7. Affordable and Clean Energy | 0 |
| 8. Decent Work and Economic Growth | 0 |
| 9. Industry, Innovation and Infrastructure | 8 |
| 10. Reduced Inequalities | 0 |
| 11. Sustainable Cities and Communities | 1 |
| 12. Responsible Consumption and Production | 0 |
| 13. Climate Action | 0 |
| 14. Life Below Water | 0 |
| 15. Life on Land | 0 |
| 16. Peace, Justice and Strong Institutions | 0 |
| 17. Partnerships for the Goals | 0 |

| Metric | Value |
|---|---|
| Documents | 53 |
| Citations | 1048 |
| h-index | 15 |

This researcher does not have a WoS ID.

| Metric | Value |
|---|---|
| Scholarly Output | 51 |
| Articles | 18 |
| Views / Downloads | 462453 / 54926 |
| Supervised MSc Theses | 10 |
| Supervised PhD Theses | 5 |
| WoS Citation Count | 582 |
| Scopus Citation Count | 753 |
| Patents | 0 |
| Projects | 5 |
| WoS Citations per Publication | 11.41 |
| Scopus Citations per Publication | 14.76 |
| Open Access Source | 36 |
| Supervised Theses | 15 |
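The two citations-per-publication figures above are consistent with dividing each database's citation count by the scholarly output of 51 items. A minimal sketch of that consistency check, assuming that ratio is indeed how the profile computes these values:

```python
# Figures taken from the profile above; the division rule itself is an assumption.
scholarly_output = 51
wos_citations = 582
scopus_citations = 753

wos_per_pub = round(wos_citations / scholarly_output, 2)        # 11.41
scopus_per_pub = round(scopus_citations / scholarly_output, 2)  # 14.76
```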
| Journal | Count |
|---|---|
| Pattern Analysis and Applications | 3 |
| Signal Image and Video Processing | 2 |
| Signal, Image and Video Processing | 2 |
| Biosensors and Bioelectronics: X | 2 |
| 2023 Innovations in Intelligent Systems and Applications Conference, ASYU 2023 | 1 |
Scopus Quartile Distribution
Competency Cloud

Scholarly Output Search Results
Showing 1 - 10 of 51 results
Master Thesis
Estimation of Low Sucrose Concentrations and Classification of Bacteria Concentrations With Machine Learning on Spectroscopic Data (Izmir Institute of Technology, 2019). Mezgil, Bahadır; Baştanlar, Yalın.
Spectroscopy can be used to identify elements. In a similar way, recent studies use optical spectroscopy to measure material concentrations in chemical solutions. In this study, we employ machine learning techniques on collected ultraviolet-visible (UV-Vis) spectra to estimate the level of sucrose concentrations in solutions and to classify bacteria concentrations. Some metal nanoparticles are very sensitive to refractive index changes in their environment, and this helps to detect small refractive index changes in the solution. In our study, gold nanoparticles are used, and we benefit from this property to estimate sucrose concentrations. Samples at different low sucrose concentrations are obtained by mixing sucrose, measured with precision scales, into pure water, and the UV-Vis spectrum of each sample is then measured. For the bacteria concentration solutions, spectra for six different bacteria concentrations are captured. Spectra of the same solutions are also captured before adding the bacteria. For each of these solutions, four sets are prepared in which gold nanoparticles are not grown (minute 0) or are grown for 4, 10 and 12 minutes. After dataset preparation, these spectrum measurements are transferred into the MATLAB environment as a sucrose concentration dataset and a bacteria solution dataset. The necessary preprocessing steps are then performed to extract the most informative and distinguishing information from these datasets. The raw measurement values and processed spectrum measurements are trained with shallow Artificial Neural Networks (ANN) using the MATLAB Deep Learning Toolbox and Support Vector Machines (SVM) using the MATLAB Statistics and Machine Learning Toolbox.
When the results of the conducted machine learning experiments are examined, the success rate is promising for the estimation of sucrose concentrations and very high for the classification of bacteria concentrations in pure water solutions.

Conference Object (Citation - Scopus: 1)
Monocular Vision-Based Prediction of Cut-In Manoeuvres With LSTM Networks (Springer, 2023). Nalçakan, Yağız; Baştanlar, Yalın.
Advanced driver assistance and automated driving systems should be capable of predicting and avoiding dangerous situations. In this paper, we first discuss the importance of predicting dangerous lane changes and describe it as a machine learning problem. After summarizing previous work, we propose a method to predict potentially dangerous lane changes (cut-ins) of the vehicles in front. We follow a computer vision-based approach that employs only a single in-vehicle RGB camera, and we classify the target vehicle's maneuver based on the recent video frames. Our algorithm consists of a CNN-based vehicle detection and tracking step and an LSTM-based maneuver classification step. It is computationally efficient compared to other vision-based methods since it exploits a small number of features for the classification step rather than feeding CNNs with RGB frames. We evaluated our approach on a publicly available driving dataset and a lane change detection dataset. We obtained 0.9585 accuracy with the side-aware two-class (cut-in vs. lane-pass) classification model. Experiment results also reveal that our approach outperforms state-of-the-art approaches when used for lane change detection.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023.

Article (Citation - WoS: 3, Scopus: 3)
Catadioptric Hyperspectral Imaging, an Unmixing Approach (Institution of Engineering and Technology, 2020). Özışık Başkurt, Didem; Baştanlar, Yalın; Yardımcı Çetin, Yasemin.
Hyperspectral imaging systems provide dense spectral information on the scene under investigation by collecting data from a high number of contiguous bands of the electromagnetic spectrum. The low spatial resolution of these sensors frequently gives rise to the mixing problem in remote sensing applications. Several unmixing approaches have been developed to handle the challenging mixing problem on perspective images. On the other hand, omnidirectional imaging systems provide a 360-degree field of view in a single image at the expense of lower spatial resolution. In this study, we propose a novel imaging system that integrates hyperspectral cameras with mirrors so as to yield catadioptric omnidirectional imaging systems, benefiting from the advantages of both modes. Catadioptric images, which incorporate a camera with a reflecting device, introduce radial warping depending on the structure of the mirror used in the system. This warping causes a non-uniformity in the spatial resolution, which further complicates the unmixing problem. In this context, a novel spatial-contextual unmixing algorithm is developed specifically for the large field of view of the hyperspectral imaging system. The proposed algorithm is evaluated on various real-world and simulated cases. The experimental results show that the proposed approach outperforms the compared methods.

Article (Citation - WoS: 28, Scopus: 28)
A Direct Approach for Object Detection With Catadioptric Omnidirectional Cameras (Springer Verlag, 2016). Çınaroğlu, İbrahim; Baştanlar, Yalın.
In this paper, we present an omnidirectional vision-based method for object detection.
We first adopt the conventional camera approach that uses sliding windows and histogram of oriented gradients (HOG) features. Then, we describe how the feature extraction step of the conventional approach should be modified for a theoretically correct and effective use in omnidirectional cameras. The main steps are the modification of gradient magnitudes using the Riemannian metric and the conversion of gradient orientations to form an omnidirectional sliding window. In this way, we perform object detection directly on the omnidirectional images without converting them to panoramic or perspective images. Our experiments, with synthetic and real images, compare the proposed approach with regular (unmodified) HOG computation on both omnidirectional and panoramic images. Results show that the proposed approach should be preferred.

Article
Reduced egomotion estimation drift using omnidirectional views (Centre de Visio per Computador, 2014). Baştanlar, Yalın.
Estimation of camera motion from a given image sequence is a common task for multi-view 3D computer vision applications. Salient features (lines, corners, etc.) in the images are used to estimate the motion of the camera, also called egomotion. This estimation suffers from an error build-up as the length of the image sequence increases, and this causes a drift in the estimated position. In this letter, this phenomenon is demonstrated and an approach to improve the estimation accuracy is proposed. The main idea of the proposed method is using an omnidirectional camera (360° horizontal field of view) in addition to a conventional (perspective) camera. Taking advantage of the correspondences between the omnidirectional and perspective images, the accuracy of camera position estimates can be improved. In our work, we adopt the sequential structure-from-motion approach, which starts by estimating the motion between the first two views, after which more views are added one by one.
We automatically match points between omnidirectional and perspective views. Point correspondences are used for the estimation of epipolar geometry, followed by the reconstruction of 3D points with iterative linear triangulation. In addition, we calibrate our cameras using the sphere camera model, which covers both omnidirectional and perspective cameras. This enables us to treat the cameras in the same way at any step of structure-from-motion. We performed simulated and real image experiments to compare the estimation accuracy when only perspective views are used and when an omnidirectional view is added. Results show that the proposed idea of adding omnidirectional views reduces the drift in egomotion estimation.

Master Thesis
A Direct Approach for Object Detection With Omnidirectional Cameras (Izmir Institute of Technology, 2014). Çınaroğlu, İbrahim; Baştanlar, Yalın.
In this thesis, an object detection system based on an omnidirectional camera, which has the advantage of covering a large field of view, is introduced. Initially, the traditional camera approach that uses sliding windows and Histogram of Oriented Gradients (HOG) features is adopted. Later on, it is described how the feature extraction step of the conventional approach should be modified. The aim is an efficient and mathematically correct use of HOG features in omnidirectional images. The main steps are the conversion of gradient orientations to compose an omnidirectional sliding window and the modification of gradient magnitudes by means of the Riemannian metric. Owing to the proposed methods, object detection can be performed on omnidirectional images without converting them to panoramic or perspective images. Experiments conducted with both synthetic and real images compare the proposed approach with regular (unmodified) HOG computation on both omnidirectional and panoramic images.
Results show that the detection performance is improved by using the proposed method.

Article (Citation - WoS: 16, Scopus: 16)
Detection and Classification of Vehicles From Omnidirectional Videos Using Multiple Silhouettes (Springer Verlag, 2017). Karaimer, Hakkı Can; Barış, İpek; Baştanlar, Yalın.
To detect and classify vehicles in omnidirectional videos, we propose an approach based on the shape (silhouette) of the moving object obtained by background subtraction. Different from other shape-based classification techniques, we exploit the information available in multiple frames of the video. We investigated two different approaches for this purpose. One is combining silhouettes extracted from a sequence of frames to create an average silhouette; the other is making individual decisions for all frames and using the consensus of these decisions. Using multiple frames eliminates most of the wrong decisions caused by a poorly extracted silhouette from a single video frame. The vehicle types we classify are motorcycle, car (sedan) and van (minibus). The features extracted from the silhouettes are convexity, elongation, rectangularity and Hu moments. We applied two separate methods of classification. The first is a flowchart-based method that we developed, and the second is K-nearest neighbour classification. 60% of the samples in the dataset are used for training. To ensure randomization in the experiments, threefold cross-validation is applied. The results indicate that using multiple silhouettes increases the classification performance.

Article (Citation - WoS: 3, Scopus: 4)
Affordable person detection in omnidirectional cameras using radial integral channel features (Springer Verlag, 2019). Demiröz, Barış Evrim; Salah, Albert Ali; Baştanlar, Yalın; Akarun, Lale.
Omnidirectional cameras cover more ground than perspective cameras, at the expense of resolution.
Their comprehensive field of view makes omnidirectional cameras appealing for security and ambient intelligence applications. Person detection is usually a core part of such applications. Conventional methods fail for omnidirectional images due to different image geometry and formation. In this study, we propose a method for person detection in omnidirectional images based on the integral channel features approach. Features are extracted from various channels, such as LUV and gradient magnitude, and classified using boosted decision trees. Features are pixel sums inside annular sectors (doughnut-slice shapes) contained in the detection window. We also propose a novel data structure called the radial integral image that allows sums inside annular sectors to be calculated efficiently. We have shown with experiments that our method outperforms the previous state of the art and uses significantly fewer computational resources.

Article (Citation - WoS: 3, Scopus: 3)
Elimination of Useless Images From Raw Camera-Trap Data (Türkiye Klinikleri Journal of Medical Sciences, 2019). Tekeli, Ulaş; Baştanlar, Yalın.
Camera-traps are motion-triggered cameras that are used to observe animals in nature. The number of images collected from camera-traps has increased significantly with their widening use, thanks to advances in digital technology. A great workload is required for wildlife researchers to group and label these images. We propose a system to decrease the amount of time spent by researchers by eliminating useless images from raw camera-trap data. These images are too bright, too dark, blurred, or contain no animals. To eliminate bright, dark and blurred images, we employ techniques based on image histograms and the fast Fourier transform. To eliminate the images without animals, we propose a system combining convolutional neural networks and background subtraction.
We experimentally show that the proposed approach keeps 99% of photos with animals while eliminating more than 50% of photos without animals. We also present a software prototype that employs the developed algorithms to eliminate useless images.

Article (Citation - Scopus: 3)
Cut-In Maneuver Detection With Self-Supervised Contrastive Video Representation Learning (Springer, 2023). Nalçakan, Yağız; Baştanlar, Yalın.
The detection of the maneuvers of surrounding vehicles is important for autonomous vehicles to act accordingly and avoid possible accidents. This study proposes a framework based on contrastive representation learning to detect potentially dangerous cut-in maneuvers that can happen in front of the ego vehicle. First, the encoder network is trained in a self-supervised fashion with a contrastive loss, where two augmented versions of the same video clip stay close to each other in the embedding space while augmentations from different videos stay far apart. Since no maneuver labeling is required in this step, a relatively large dataset can be used. After this self-supervised training, the encoder is fine-tuned with our cut-in/lane-pass labeled datasets. Instead of using original video frames, we simplified the scene by highlighting the surrounding vehicles and the ego lane. We have investigated the use of several classification heads, augmentation types, and scene simplification alternatives. The most successful model outperforms the best fully supervised model by ~2%, with an accuracy of 92.52%.
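The self-supervised pre-training described in the last entry relies on a contrastive loss that pulls two augmentations of the same clip together in the embedding space while pushing different clips apart. As a hedged illustration only (not the paper's actual implementation, which trains a video encoder with its own hyperparameters), a minimal NumPy sketch of a standard NT-Xent-style contrastive loss over a batch of embeddings could look like this:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over a batch of paired embeddings.

    z1, z2: (N, D) embeddings of two augmented views of the same N clips.
    (z1[i], z2[i]) are positive pairs; every other embedding in the batch
    serves as a negative.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize -> cosine sims
    sim = z @ z.T / temperature                       # (2N, 2N) similarity matrix
    n = z1.shape[0]
    sim[np.eye(2 * n, dtype=bool)] = -np.inf          # exclude self-similarity
    # row i's positive sits at i+n (first half) or i-n (second half)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

With identical views and orthogonal negatives the loss approaches zero; with random embeddings it stays finite and positive, which is the behavior the fine-tuning stage builds on.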
