Computer Engineering / Bilgisayar Mühendisliği
Permanent URI for this collection: https://hdl.handle.net/11147/10
9 results
Search Results
Article. Citations: WoS 3, Scopus 4.
Affordable person detection in omnidirectional cameras using radial integral channel features (Springer Verlag, 2019). Demiröz, Barış Evrim; Salah, Albert Ali; Baştanlar, Yalın; Akarun, Lale.
Omnidirectional cameras cover more ground than perspective cameras, at the expense of resolution. Their comprehensive field of view makes omnidirectional cameras appealing for security and ambient intelligence applications, and person detection is usually a core part of such applications. Conventional methods fail for omnidirectional images due to the different image geometry and formation. In this study, we propose a method for person detection in omnidirectional images based on the integral channel features approach. Features are extracted from various channels, such as LUV and gradient magnitude, and classified using boosted decision trees. The features are pixel sums inside annular sectors (doughnut-slice shapes) contained by the detection window. We also propose a novel data structure, called the radial integral image, that allows sums inside annular sectors to be calculated efficiently. Our experiments show that the method outperforms the previous state of the art while using significantly fewer computational resources.

Article. Citations: WoS 3, Scopus 3.
Elimination of Useless Images From Raw Camera-Trap Data (Türkiye Klinikleri Journal of Medical Sciences, 2019). Tekeli, Ulaş; Baştanlar, Yalın.
Camera-traps are motion-triggered cameras that are used to observe animals in nature. The number of images collected from camera-traps has increased significantly with their widening use, thanks to advances in digital technology. Grouping and labelling these images imposes a great workload on wildlife researchers. We propose a system that reduces the time spent by researchers by eliminating useless images from raw camera-trap data.
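As a sketch of the radial integral image idea in the person-detection entry above: one plausible construction (not necessarily the paper's exact one) resamples a channel onto a polar (angle, radius) grid and builds an ordinary 2-D integral image over it, so any annular sector sum costs only four lookups. All names below are illustrative.

```python
import numpy as np

def radial_integral_image(polar):
    """Cumulative sums over a (n_angles, n_radii) polar resampling.

    polar[a, r] holds the channel value at angular bin a and radial
    bin r; a standard 2-D integral image over this grid lets annular
    sector sums be read off with four lookups.
    """
    rii = np.zeros((polar.shape[0] + 1, polar.shape[1] + 1))
    rii[1:, 1:] = polar.cumsum(axis=0).cumsum(axis=1)
    return rii

def annular_sector_sum(rii, a1, a2, r1, r2):
    """Sum of polar[a1:a2, r1:r2] in O(1) time."""
    return rii[a2, r2] - rii[a1, r2] - rii[a2, r1] + rii[a1, r1]

# Tiny check on a 4x3 polar grid of ones: a sector spanning two
# angular bins and three radial bins should sum to 6.
polar = np.ones((4, 3))
rii = radial_integral_image(polar)
print(annular_sector_sum(rii, 0, 2, 0, 3))  # 6.0
```

The same four-lookup trick used by classic integral images carries over unchanged; only the resampling to polar coordinates is specific to the omnidirectional setting.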
These images are too bright, too dark, or blurred, or they contain no animals. To eliminate bright, dark, and blurred images, we employ techniques based on image histograms and the fast Fourier transform. To eliminate the images without animals, we propose a system combining convolutional neural networks and background subtraction. We experimentally show that the proposed approach keeps 99% of the photos with animals while eliminating more than 50% of the photos without animals. We also present a software prototype that employs the developed algorithms to eliminate useless images.

Article. Citations: WoS 9, Scopus 13.
Training CNNs With Image Patches for Object Localisation (Institution of Engineering and Technology, 2018). Orhan, Semih; Baştanlar, Yalın.
Recently, convolutional neural networks (CNNs) have shown great performance in different problems of computer vision, including object detection and localisation. A novel training approach is proposed for CNNs to localise animal species whose bodies have distinctive patterns, such as leopards and zebras. To learn characteristic patterns, small patches taken from different body parts of the animals are used to train the models. To find the object location in a test image, all locations are visited in a sliding-window fashion. Crops are fed into the trained CNN and their classification scores are combined into a heat map. The heat maps are then converted to bounding-box estimates for varying confidence scores. The localisation performance of the patch-based training approach is compared with Faster R-CNN, a state-of-the-art CNN-based object detection and localisation method.
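The sliding-window scoring just described can be sketched as follows; `score_fn` stands in for the trained CNN, and the window size, stride, and single-box estimation step are illustrative simplifications, not the paper's settings.

```python
import numpy as np

def heat_map(image, score_fn, win=32, stride=8):
    """Slide a win x win window over `image`, score each crop with
    `score_fn` (a stand-in for the trained CNN), and accumulate the
    scores into a per-pixel heat map."""
    h, w = image.shape
    heat = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            s = score_fn(image[y:y + win, x:x + win])
            heat[y:y + win, x:x + win] += s
            counts[y:y + win, x:x + win] += 1
    # Average over the number of windows covering each pixel.
    return heat / np.maximum(counts, 1)

def boxes_from_heat(heat, thresh):
    """Turn the heat map into one bounding box covering all pixels
    above `thresh` (a simplification of the paper's box estimates)."""
    ys, xs = np.nonzero(heat > thresh)
    if len(ys) == 0:
        return None
    return xs.min(), ys.min(), xs.max(), ys.max()
```

Sweeping `thresh` over a range of values yields box estimates at varying confidence scores, as the abstract describes.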
Experimental results reveal that the patch-based training outperforms Faster R-CNN, especially for classes with distinctive patterns.

Conference Object. Citations: WoS 4, Scopus 13.
Classification and Tracking of Traffic Scene Objects With Hybrid Camera Systems (Institute of Electrical and Electronics Engineers Inc., 2018). Barış, İpek; Baştanlar, Yalın.
In a hybrid camera system combining an omnidirectional and a pan-tilt-zoom (PTZ) camera, the omnidirectional camera provides a 360-degree horizontal field of view, whereas the PTZ camera provides high resolution in a certain direction. The result is a camera system with both a wide field of view and high resolution. In this paper, we exploit this hybrid system for real-time object classification and tracking in traffic scenes. The omnidirectional camera detects the moving objects and performs an initial classification using shape-based features. Concurrently, the PTZ camera classifies the objects using high-resolution frames and Histogram of Oriented Gradients (HOG) features. The PTZ camera also performs high-resolution tracking for objects classified as the target class by the omnidirectional camera. The object types we worked on are pedestrian, motorcycle, car, and van. Extensive experiments were conducted to compare the classification accuracy of the hybrid system with single-camera alternatives.

Article. Citations: WoS 16, Scopus 16.
Detection and Classification of Vehicles From Omnidirectional Videos Using Multiple Silhouettes (Springer Verlag, 2017). Karaimer, Hakkı Can; Barış, İpek; Baştanlar, Yalın.
To detect and classify vehicles in omnidirectional videos, we propose an approach based on the shape (silhouette) of the moving object obtained by background subtraction. Different from other shape-based classification techniques, we exploit the information available in multiple frames of the video. We investigated two different approaches for this purpose.
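Returning to the camera-trap entry above: its histogram- and FFT-based screening of over-bright, over-dark, and blurred frames might look like the following minimal sketch. All thresholds are illustrative assumptions, not the paper's values.

```python
import numpy as np

def is_too_bright_or_dark(gray, low=0.15, high=0.85):
    """Flag frames whose mean intensity (a one-number summary of the
    histogram) falls outside plausible bounds; `gray` is a 2-D array
    scaled to [0, 1]."""
    m = gray.mean()
    return m < low or m > high

def is_blurred(gray, cutoff=8, thresh=1e-4):
    """Flag blurred frames by the fraction of spectral energy that
    lies outside a small low-frequency square around the DC bin;
    sharp images keep substantial high-frequency content."""
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    energy = np.abs(spectrum) ** 2
    cy, cx = energy.shape[0] // 2, energy.shape[1] // 2
    low_freq = energy[cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff].sum()
    high_ratio = (energy.sum() - low_freq) / energy.sum()
    return high_ratio < thresh
```

Frames failing either test would be discarded before the CNN-plus-background-subtraction stage decides whether an animal is present.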
One combines silhouettes extracted from a sequence of frames to create an average silhouette; the other makes individual decisions for all frames and uses the consensus of these decisions. Using multiple frames eliminates most of the wrong decisions caused by a poorly extracted silhouette in a single video frame. The vehicle types we classify are motorcycle, car (sedan), and van (minibus). The features extracted from the silhouettes are convexity, elongation, rectangularity, and Hu moments. We applied two separate methods of classification: the first is a flowchart-based method that we developed, and the second is K-nearest neighbour classification. 60% of the samples in the dataset are used for training. To ensure randomization in the experiments, threefold cross-validation is applied. The results indicate that using multiple silhouettes increases the classification performance.

Article. Citations: WoS 28, Scopus 28.
A Direct Approach for Object Detection With Catadioptric Omnidirectional Cameras (Springer Verlag, 2016). Çınaroğlu, İbrahim; Baştanlar, Yalın.
In this paper, we present an omnidirectional vision-based method for object detection. We first adopt the conventional camera approach that uses sliding windows and histogram of oriented gradients (HOG) features. We then describe how the feature extraction step of the conventional approach should be modified for theoretically correct and effective use with omnidirectional cameras. The main steps are the modification of gradient magnitudes using the Riemannian metric and the conversion of gradient orientations to form an omnidirectional sliding window. In this way, we perform object detection directly on the omnidirectional images without converting them to panoramic or perspective images. Our experiments, with synthetic and real images, compare the proposed approach with regular (unmodified) HOG computation on both omnidirectional and panoramic images.
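The two multi-frame strategies in the vehicle-classification entry above, the average silhouette and the per-frame consensus, can be sketched as follows. This is a simplified illustration that assumes aligned, equally sized binary masks; names are illustrative.

```python
import numpy as np
from collections import Counter

def average_silhouette(masks, thresh=0.5):
    """Combine per-frame binary silhouettes into one average
    silhouette: a pixel survives if it is set in at least `thresh`
    of the frames, so one noisy frame has little effect."""
    stack = np.stack([m.astype(float) for m in masks])
    return stack.mean(axis=0) >= thresh

def consensus_decision(per_frame_labels):
    """The alternative strategy: classify each frame independently,
    then take the majority vote over the per-frame decisions."""
    return Counter(per_frame_labels).most_common(1)[0][0]
```

Either way, a silhouette that was poorly extracted in a single frame is outvoted by the rest of the sequence.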
Results show that the proposed approach should be preferred.

Article. Citations: WoS 5, Scopus 5.
Instance Detection by Keypoint Matching Beyond the Nearest Neighbor (Springer Verlag, 2016). Uzyıldırım, Furkan Eren; Özuysal, Mustafa.
Binary descriptors are the representation of choice for real-time keypoint matching. However, they suffer from reduced matching rates due to their discrete nature. We propose an approach that augments their performance by searching among the top K near-neighbor matches instead of only the single nearest neighbor. To pick the correct match out of the K near neighbors, we exploit statistics of descriptor variations collected for each keypoint in an off-line training phase. This approach is similar to those that learn a patch-specific keypoint representation; unlike them, we use only a keypoint-specific score to rank the list of K near neighbors. Since this list can be computed efficiently with approximate nearest neighbor algorithms, our approach scales well to large descriptor sets.

Conference Object. Citations: Scopus 5.
Detection and Classification of Vehicles From Omnidirectional Videos Using Temporal Average of Silhouettes (INSTICC, 2015). Karaimer, Hakkı Can; Baştanlar, Yalın.
This paper describes an approach to detect and classify vehicles in omnidirectional videos. The proposed classification method is based on the shape (silhouette) of the detected moving object obtained by background subtraction. Different from other shape-based classification techniques, we exploit the information available in multiple frames of the video. The silhouettes extracted from a sequence of frames are combined to create an 'average' silhouette. This approach eliminates most of the wrong decisions caused by a poorly extracted silhouette in a single video frame. The vehicle types we worked on are motorcycle, car (sedan), and van (minibus).
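The top-K idea in the keypoint-matching entry above can be sketched as follows, with packed binary descriptors, Hamming distance, and a precomputed per-keypoint score standing in for the learned descriptor-variation statistics. All names are illustrative.

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between packed binary descriptors (uint8)."""
    return int(np.unpackbits(a ^ b).sum())

def match_top_k(query, database, scores, k=3):
    """Instead of returning the single nearest neighbor, take the k
    nearest database descriptors by Hamming distance and return the
    index with the best keypoint-specific score (`scores[i]`,
    assumed learned off-line; higher is better)."""
    d = np.array([hamming(query, db) for db in database])
    top = np.argsort(d)[:k]
    return int(top[np.argmax(scores[top])])
```

With k=1 this degenerates to ordinary nearest-neighbor matching; larger k lets the learned score rescue matches that the discrete Hamming distance alone would get wrong.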
The features extracted from the silhouettes are convexity, elongation, rectangularity, and Hu moments. The decision boundaries in the feature space are determined using a training set, whereas the performance of the proposed classification is measured with a test set. To ensure randomization, the procedure is repeated with the whole dataset split differently into training and testing samples. The results indicate that the proposed method of using average silhouettes performs better than using the silhouette of a single frame.

Conference Object. Citations: WoS 21, Scopus 33.
A Direct Approach for Human Detection With Catadioptric Omnidirectional Cameras (Institute of Electrical and Electronics Engineers Inc., 2014). Çınaroğlu, İbrahim; Baştanlar, Yalın.
This paper presents an omnidirectional vision-based solution for detecting human beings. We first review the conventional sliding-window approaches for human detection. We then describe how the feature extraction step of the conventional approaches should be modified for theoretically correct and effective use with omnidirectional cameras. In this way, we perform human detection directly on the omnidirectional images without converting them to panoramic or perspective images. Our experiments, with both synthetic and real images, show that the proposed approach produces successful results. © 2014 IEEE.
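Two of the silhouette features named in the vehicle-classification entries, rectangularity and elongation, can be computed directly from a binary mask. This is a simplified sketch, not necessarily the papers' exact definitions; convexity and the Hu moments are omitted, since they require convex-hull and image-moment computations respectively.

```python
import numpy as np

def shape_features(mask):
    """Rectangularity (silhouette area over bounding-box area) and
    elongation (long bounding-box side over short side) of a binary
    silhouette mask."""
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    area = mask.sum()
    rectangularity = area / (h * w)
    elongation = max(h, w) / min(h, w)
    return rectangularity, elongation
```

A boxy van silhouette would score high rectangularity and low elongation, whereas a motorcycle silhouette would tend the other way, which is what makes such features usable for the flowchart and K-nearest-neighbour classifiers described above.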
