Computer Engineering / Bilgisayar Mühendisliği
Permanent URI for this collection: https://hdl.handle.net/11147/10
Search Results (4 results)
Article | Citations: WoS 3, Scopus 4
Affordable person detection in omnidirectional cameras using radial integral channel features (Springer Verlag, 2019)
Authors: Demiröz, Barış Evrim; Salah, Albert Ali; Baştanlar, Yalın; Akarun, Lale
Omnidirectional cameras cover more ground than perspective cameras, at the expense of resolution. Their comprehensive field of view makes omnidirectional cameras appealing for security and ambient intelligence applications, and person detection is usually a core part of such applications. Conventional methods fail on omnidirectional images because of their different image geometry and formation. In this study, we propose a method for person detection in omnidirectional images based on the integral channel features approach. Features are extracted from various channels, such as LUV and gradient magnitude, and classified using boosted decision trees. The features are pixel sums inside annular sectors (doughnut-slice shapes) contained by the detection window. We also propose a novel data structure, the radial integral image, that allows sums inside annular sectors to be calculated efficiently. Our experiments show that the method outperforms the previous state of the art while using significantly less computational resources.

Article | Citations: WoS 16, Scopus 16
Detection and Classification of Vehicles From Omnidirectional Videos Using Multiple Silhouettes (Springer Verlag, 2017)
Authors: Karaimer, Hakkı Can; Barış, İpek; Baştanlar, Yalın
To detect and classify vehicles in omnidirectional videos, we propose an approach based on the shape (silhouette) of the moving object obtained by background subtraction. Unlike other shape-based classification techniques, we exploit the information available in multiple frames of the video. We investigated two different approaches for this purpose.
One combines silhouettes extracted from a sequence of frames to create an average silhouette; the other makes an individual decision for each frame and uses the consensus of these decisions. Using multiple frames eliminates most of the wrong decisions caused by a silhouette poorly extracted from a single video frame. The vehicle types we classify are motorcycle, car (sedan), and van (minibus). The features extracted from the silhouettes are convexity, elongation, rectangularity, and Hu moments. We applied two separate classification methods: the first is a flowchart-based method that we developed, and the second is K-nearest neighbour classification. 60% of the samples in the dataset are used for training, and to ensure randomization in the experiments, threefold cross-validation is applied. The results indicate that using multiple silhouettes increases classification performance.

Article | Citations: WoS 28, Scopus 28
A Direct Approach for Object Detection With Catadioptric Omnidirectional Cameras (Springer Verlag, 2016)
Authors: Çınaroğlu, İbrahim; Baştanlar, Yalın
In this paper, we present an omnidirectional vision-based method for object detection. We first adopt the conventional camera approach, which uses sliding windows and histogram of oriented gradients (HOG) features. We then describe how the feature extraction step of the conventional approach should be modified for theoretically correct and effective use with omnidirectional cameras. The main steps are the modification of gradient magnitudes using the Riemannian metric and the conversion of gradient orientations to form an omnidirectional sliding window. In this way, we perform object detection directly on omnidirectional images without converting them to panoramic or perspective images. Our experiments, with synthetic and real images, compare the proposed approach with regular (unmodified) HOG computation on both omnidirectional and panoramic images.
Results show that the proposed approach should be preferred.

Article | Citations: WoS 5, Scopus 5
Instance Detection by Keypoint Matching Beyond the Nearest Neighbor (Springer Verlag, 2016)
Authors: Uzyıldırım, Furkan Eren; Özuysal, Mustafa
Binary descriptors are the representation of choice for real-time keypoint matching. However, they suffer from reduced matching rates due to their discrete nature. We propose an approach that augments their performance by searching among the top K near-neighbor matches instead of only the single nearest neighbor. To pick the correct match out of the K near neighbors, we exploit statistics of descriptor variations collected for each keypoint in an off-line training phase. This approach is similar to those that learn a patch-specific keypoint representation; unlike them, we use only a keypoint-specific score to rank the list of K near neighbors. Since this list can be computed efficiently with approximate nearest neighbor algorithms, our approach scales well to large descriptor sets.
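The radial integral image described in the first article above can be illustrated with a simplified stand-in: resample the image onto a polar (radius, angle) grid around the mirror center and build an ordinary 2-D integral image over that grid, so that any annular-sector sum becomes a constant-time four-corner lookup. This is a rough sketch of the idea, not the authors' exact data structure; the grid resolution and nearest-neighbor sampling here are assumptions made for brevity.

```python
import numpy as np

def polar_integral_image(img, center, n_r=100, n_theta=360):
    """Resample a single-channel image onto a polar (r, theta) grid around
    `center`, then build a 2-D integral image over that grid. Illustrative
    stand-in for the paper's radial integral image, not the exact structure."""
    cy, cx = center
    h, w = img.shape
    r_max = min(cy, cx, h - 1 - cy, w - 1 - cx)
    rs = np.linspace(0, r_max, n_r)
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    # Nearest-neighbor sampling of the image at each polar grid point.
    ys = np.clip((cy + rs[:, None] * np.sin(thetas)).round().astype(int), 0, h - 1)
    xs = np.clip((cx + rs[:, None] * np.cos(thetas)).round().astype(int), 0, w - 1)
    polar = img[ys, xs].astype(np.float64)
    # Integral image padded with a zero row/column so queries need no bounds checks.
    ii = np.zeros((n_r + 1, n_theta + 1))
    ii[1:, 1:] = polar.cumsum(axis=0).cumsum(axis=1)
    return ii

def annular_sector_sum(ii, r0, r1, t0, t1):
    """Sum over radius bins [r0, r1) and angle bins [t0, t1) in O(1)
    via the usual four-corner integral-image trick."""
    return ii[r1, t1] - ii[r0, t1] - ii[r1, t0] + ii[r0, t0]
```

On a constant image, the sum over an annular sector is simply the number of polar bins it covers, which makes the four-corner lookup easy to sanity-check.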
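The two multi-frame strategies in the vehicle-classification article (combining silhouettes into an average silhouette, versus voting over per-frame decisions) can be sketched in a few lines. The sketch below assumes the binary silhouettes have already been aligned and scaled to a common grid, which the paper's pipeline would handle upstream:

```python
import numpy as np
from collections import Counter

def average_silhouette(masks, threshold=0.5):
    """Combine aligned binary silhouettes from several frames into one
    average silhouette: a pixel is kept as foreground if it is set in at
    least `threshold` of the frames."""
    stacked = np.stack([m.astype(np.float64) for m in masks])
    return stacked.mean(axis=0) >= threshold

def consensus_class(per_frame_labels):
    """The alternative strategy: classify each frame independently and
    take the majority vote of the per-frame decisions."""
    return Counter(per_frame_labels).most_common(1)[0][0]
```

Either route suppresses the influence of a single badly extracted silhouette, which is the failure mode the abstract highlights.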
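The top-K matching idea in the last article can be sketched as follows. The per-keypoint reliability score used here is a hypothetical stand-in for the descriptor-variation statistics the authors collect in their off-line training phase; the point of the sketch is only the mechanism of re-ranking a short candidate list instead of trusting the single nearest neighbor:

```python
import numpy as np

def top_k_rerank(query, database, keypoint_scores, k=5):
    """Match a binary descriptor against a database: take the K nearest
    neighbors by Hamming distance, then pick the candidate with the best
    keypoint-specific score (hypothetical learned reliability per keypoint)."""
    # Hamming distance between the query and every database descriptor.
    dists = np.count_nonzero(database != query, axis=1)
    k = min(k, len(database))
    # Indices of the K nearest candidates (order within the K is irrelevant).
    candidates = np.argpartition(dists, k - 1)[:k]
    # Re-rank the short list by the per-keypoint score instead of
    # returning the single nearest neighbor.
    return int(candidates[np.argmax(keypoint_scores[candidates])])
```

With k=1 this degenerates to ordinary nearest-neighbor matching; with larger k, a candidate that is slightly farther in Hamming distance but more reliable can win, which is the behavior the abstract describes.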
