Computer Engineering / Bilgisayar Mühendisliği

Permanent URI for this collection: https://hdl.handle.net/11147/10

Now showing 1 - 4 of 4
  • Article
    Citation - WoS: 3
    Citation - Scopus: 4
    Affordable person detection in omnidirectional cameras using radial integral channel features
    (Springer Verlag, 2019) Demiröz, Barış Evrim; Salah, Albert Ali; Baştanlar, Yalın; Akarun, Lale
    Omnidirectional cameras cover more ground than perspective cameras, at the expense of resolution. Their comprehensive field of view makes omnidirectional cameras appealing for security and ambient intelligence applications. Person detection is usually a core part of such applications. Conventional methods fail for omnidirectional images due to different image geometry and formation. In this study, we propose a method for person detection in omnidirectional images, which is based on the integral channel features approach. Features are extracted from various channels, such as LUV and gradient magnitude, and classified using boosted decision trees. Features are pixel sums inside annular sectors (doughnut slice shapes) contained by the detection window. We also propose a novel data structure called the radial integral image that allows calculating sums inside annular sectors efficiently. Experiments show that our method outperforms the previous state of the art while using significantly less computational resources.
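    The radial integral image idea can be sketched as follows (function names, the nearest-neighbour polar resampling, and the bin counts are our illustration, not the paper's exact formulation): resample a feature channel onto an (angle, radius) grid, take a 2D cumulative sum over that grid, and read off any annular-sector sum with four lookups, exactly as a conventional integral image does for axis-aligned rectangles.

```python
import numpy as np

def radial_integral_image(channel, center, n_angles=360, n_radii=200):
    """Resample a feature channel into polar (angle, radius) bins around
    `center` and build a 2D cumulative sum over that grid (illustrative)."""
    cy, cx = center
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    radii = np.linspace(0, min(channel.shape) // 2 - 1, n_radii)
    # Sample the channel along each ray (nearest-neighbour for brevity).
    ys = np.clip((cy + np.outer(np.sin(angles), radii)).astype(int),
                 0, channel.shape[0] - 1)
    xs = np.clip((cx + np.outer(np.cos(angles), radii)).astype(int),
                 0, channel.shape[1] - 1)
    polar = channel[ys, xs]                      # shape: (n_angles, n_radii)
    # Pad with a zero row/column so that index 0 means "empty prefix".
    ii = np.zeros((n_angles + 1, n_radii + 1))
    ii[1:, 1:] = polar.cumsum(axis=0).cumsum(axis=1)
    return ii

def annular_sector_sum(ii, a0, a1, r0, r1):
    """Sum inside the annular sector of angle bins [a0, a1) and radius
    bins [r0, r1) in O(1), via the four-corner integral-image identity.
    (Wrap-around sectors crossing angle 0 are not handled in this sketch.)"""
    return ii[a1, r1] - ii[a0, r1] - ii[a1, r0] + ii[a0, r0]
```

    With the cumulative grid in place, every candidate annular sector inside a detection window costs four array reads, which is what makes boosted-tree feature evaluation affordable.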
  • Article
    Citation - WoS: 16
    Citation - Scopus: 16
    Detection and Classification of Vehicles From Omnidirectional Videos Using Multiple Silhouettes
    (Springer Verlag, 2017) Karaimer, Hakkı Can; Barış, İpek; Baştanlar, Yalın
    To detect and classify vehicles in omnidirectional videos, we propose an approach based on the shape (silhouette) of the moving object obtained by background subtraction. Different from other shape-based classification techniques, we exploit the information available in multiple frames of the video. We investigated two different approaches for this purpose. One is combining silhouettes extracted from a sequence of frames to create an average silhouette; the other is making individual decisions for all frames and using the consensus of these decisions. Using multiple frames eliminates most of the wrong decisions which are caused by a poorly extracted silhouette from a single video frame. The vehicle types we classify are motorcycle, car (sedan) and van (minibus). The features extracted from the silhouettes are convexity, elongation, rectangularity and Hu moments. We applied two separate methods of classification. The first is a flowchart-based method that we developed; the second is K-nearest-neighbour classification. 60% of the samples in the dataset are used for training. To ensure randomization in the experiments, threefold cross-validation is applied. The results indicate that using multiple silhouettes increases the classification performance.
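    The average-silhouette combination and two of the listed shape features can be sketched as below (a minimal numpy sketch; the names, the 0.5 voting threshold, and the bounding-box-based feature definitions are our assumptions, not necessarily the paper's exact implementation):

```python
import numpy as np

def average_silhouette(masks, threshold=0.5):
    """Combine per-frame binary silhouettes into one average silhouette:
    a pixel survives if it is foreground in at least `threshold` of frames."""
    return np.mean(masks, axis=0) >= threshold

def shape_features(mask):
    """Two of the silhouette features used for classification (illustrative):
    rectangularity = area / bounding-box area; elongation = long side / short side."""
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    area = mask.sum()
    return {"rectangularity": area / (h * w),
            "elongation": max(w, h) / min(w, h)}
```

    Averaging before feature extraction suppresses single-frame segmentation glitches, which is the intuition behind both multi-frame strategies described above.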
  • Article
    Citation - WoS: 28
    Citation - Scopus: 28
    A Direct Approach for Object Detection With Catadioptric Omnidirectional Cameras
    (Springer Verlag, 2016) Çınaroğlu, İbrahim; Baştanlar, Yalın
    In this paper, we present an omnidirectional vision-based method for object detection. We first adopt the conventional camera approach that uses sliding windows and histogram of oriented gradients (HOG) features. Then, we describe how the feature extraction step of the conventional approach should be modified for a theoretically correct and effective use in omnidirectional cameras. The main steps are the modification of gradient magnitudes using the Riemannian metric and the conversion of gradient orientations to form an omnidirectional sliding window. In this way, we perform object detection directly on the omnidirectional images without converting them to panoramic or perspective images. Our experiments, with synthetic and real images, compare the proposed approach with regular (unmodified) HOG computation on both omnidirectional and panoramic images. Results show that the proposed approach should be preferred.
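    The gradient-modification step can be sketched schematically (the radius-dependent `weight_fn` below is a placeholder of our own; the paper derives the actual factor from the Riemannian metric of the catadioptric mirror geometry):

```python
import numpy as np

def omnidirectional_gradients(img, center, weight_fn):
    """Compute gradient magnitude/orientation and reweight magnitudes by a
    radius-dependent factor (schematic: the true factor comes from the
    Riemannian metric of the mirror geometry, not from `weight_fn` here)."""
    gy, gx = np.gradient(img.astype(float))       # image-plane derivatives
    mag = np.hypot(gx, gy)                        # gradient magnitude
    ori = np.arctan2(gy, gx)                      # gradient orientation
    yy, xx = np.indices(img.shape)
    r = np.hypot(yy - center[0], xx - center[1])  # distance to image center
    return mag * weight_fn(r), ori
```

    Once magnitudes are corrected and orientations expressed relative to the radial direction, HOG cells can be laid out along annular sliding windows instead of rectangles, which is what allows detection directly on the omnidirectional image.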
  • Article
    Citation - WoS: 69
    Citation - Scopus: 89
    Joint Optimization for Object Class Segmentation and Dense Stereo Reconstruction
    (Springer Verlag, 2012) Ladicky, Lubor; Sturgess, Paul; Russell, Chris; Sengupta, Sunando; Baştanlar, Yalın; Clocksin, William; Torr, Philip H.S.
    The problems of dense stereo reconstruction and object class segmentation can both be formulated as Random Field labeling problems, in which every pixel in the image is assigned a label corresponding to either its disparity, or an object class such as road or building. While these two problems are mutually informative, no attempt has been made to jointly optimize their labelings. In this work we provide a flexible framework configured via cross-validation that unifies the two problems and demonstrate that, by resolving ambiguities, which would be present in real world data if the two problems were considered separately, joint optimization of the two problems substantially improves performance. To evaluate our method, we augment the Leuven data set (http://cms.brookes.ac.uk/research/visiongroup/files/Leuven.zip), which is a stereo video shot from a car driving around the streets of Leuven, with 70 hand labeled object class and disparity maps. We hope that the release of these annotations will stimulate further work in the challenging domain of street-view analysis. Complete source code is publicly available (http://cms.brookes.ac.uk/staff/Philip-Torr/ale.htm). © 2011 Springer Science+Business Media, LLC.
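    The joint-labeling idea can be illustrated with a toy per-pixel energy (our own reduction for illustration: no pairwise smoothing terms, whereas the paper optimizes a full Random Field): each pixel picks the (class, disparity) pair minimizing the sum of its two unary costs plus a compatibility term that penalizes implausible combinations, e.g. large disparities for a far-away class.

```python
import numpy as np

def joint_label(class_cost, disp_cost, compat):
    """Per-pixel joint minimization over (class, disparity) pairs.
    Toy sketch: the paper optimizes a full Random Field with pairwise
    terms; here each pixel is solved independently.
    class_cost: (P, C), disp_cost: (P, D), compat: (C, D)."""
    # Total cost of every (class, disparity) pair at every pixel: (P, C, D).
    total = (class_cost[:, :, None]
             + disp_cost[:, None, :]
             + compat[None, :, :])
    flat = total.reshape(total.shape[0], -1).argmin(axis=1)
    # Recover (class index, disparity index) for each pixel.
    return np.unravel_index(flat, compat.shape)
```

    Even in this stripped-down form, the compatibility term shows how semantic evidence can veto a disparity that looks locally cheap, which is the ambiguity-resolving effect the joint optimization exploits.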