Computer Engineering / Bilgisayar Mühendisliği

Permanent URI for this collection: https://hdl.handle.net/11147/10

  • Conference Object
    Vehicle Classification Using Time-Averaged Binary Foreground Images [Zamanda ortalaması alınmış ikili önplan imgeleri kullanarak taşıt sınıflandırması]
    (IEEE, 2015) Karaimer, Hakkı Can; Baştanlar, Yalın
    We describe a shape-based method for classification of vehicles from omnidirectional videos. Different from similar approaches, the binary images of vehicles obtained by background subtraction in a sequence of frames are averaged over time. We show with experiments that using the average shape of the object results in a more accurate classification than using a single frame. The vehicle types we classify are motorcycle, car and van. We created an omnidirectional video dataset and repeated experiments with shuffled train-test sets to ensure randomization.
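The time-averaging step described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the toy masks, the 0.5 re-binarization threshold, and the helper name are all assumptions made for the example.

```python
import numpy as np

def average_silhouette(masks, threshold=0.5):
    """Average a sequence of binary foreground masks over time and
    re-binarize: a pixel is kept in the averaged silhouette if it is
    foreground in at least `threshold` of the frames.
    (Threshold value is an assumption, not from the paper.)"""
    stack = np.stack([m.astype(np.float32) for m in masks], axis=0)
    return (stack.mean(axis=0) >= threshold).astype(np.uint8)

# Hypothetical toy masks: one frame contains a spurious foreground
# pixel, which the temporal average suppresses.
m1 = np.array([[1, 1], [0, 0]], dtype=np.uint8)
m2 = np.array([[1, 1], [0, 1]], dtype=np.uint8)  # noisy frame
m3 = np.array([[1, 1], [0, 0]], dtype=np.uint8)
avg = average_silhouette([m1, m2, m3])
```

The flickering pixel appears in only one of three frames (mean 1/3 < 0.5), so it is dropped, which is the intuition behind averaging silhouettes before classification.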
  • Correction
    Citation - WoS: 1
    Citation - Scopus: 1
    Correction To: Detection and Classification of Vehicles From Omnidirectional Videos Using Multiple Silhouettes
    (Springer, 2018) Karaimer, Hakkı Can; Barış, İpek; Baştanlar, Yalın
    An acknowledgements section was missing in this paper. It should read as follows:
  • Conference Object
    Citation - WoS: 146
    Citation - Scopus: 216
    The Visual Object Tracking Vot2013 Challenge Results
    (Institute of Electrical and Electronics Engineers Inc., 2013) Kristan, Matej; Pflugfelder, Roman; Leonardis, Ales; Matas, Jiri; Porikli, Fatih; Cehovin, Luka; Nebehay, Georg; Fernandez, Gustavo; Vojir, Tomas; Gatt, Adam; Khajenezhad, Ahmad; Salahledin, Ahmed; Soltani-Farani, Ali; Zarezade, Ali; Petrosino, Alfredo; Milton, Anthony; Bozorgtabar, Behzad; Li, Bo; Chan, Chee Seng; Heng, Cher Keng; Ward, Dale; Kearney, David; Monekosso, Dorothy; Karaimer, Hakkı Can; Rabiee, Hamid R.; Zhu, Jianke; Gao, Jin; Xiao, Jingjing; Zhang, Junge; Xing, Junliang; Huang, Kaiqi; Lebeda, Karel; Cao, Lijun; Maresca, Mario Edoardo; Lim, Mei Kuan; El Helw, Mohamed; Felsberg, Michael; Remagnino, Paolo; Bowden, Richard; Goecke, Roland; Stolkin, Rustam; Lim, Samantha YueYing; Maher, Sara; Poullot, Sebastien; Wong, Sebastien; Satoh, Shin’ichi; Chen, Weihua; Hu, Weiming; Zhang, Xiaoqin; Li, Yang; Zhi Heng, Niu
    Visual tracking has attracted significant attention in the last few decades. The recent surge in the number of publications on tracking-related problems has made it almost impossible to follow the developments in the field. One of the reasons is that there is a lack of commonly accepted annotated datasets and standardized evaluation protocols that would allow objective comparison of different tracking methods. To address this issue, the Visual Object Tracking (VOT) workshop was organized in conjunction with ICCV2013. Researchers from academia as well as industry were invited to participate in the first VOT2013 challenge, which aimed at single-object visual trackers that do not apply pre-learned models of object appearance (model-free). Presented here is the VOT2013 benchmark dataset for evaluation of single-object visual trackers, as well as the results obtained by the trackers competing in the challenge. In contrast to related attempts in tracker benchmarking, the dataset is labeled per-frame by visual attributes that indicate occlusion, illumination change, motion change, size change and camera motion, offering a more systematic comparison of the trackers. Furthermore, we have designed an automated system for performing and evaluating the experiments. We present the evaluation protocol of the VOT2013 challenge and the results of a comparison of 27 trackers on the benchmark dataset. The dataset, the evaluation tools and the tracker rankings are publicly available from the challenge website (http://votchallenge.net).
  • Article
    Citation - WoS: 16
    Citation - Scopus: 16
    Detection and Classification of Vehicles From Omnidirectional Videos Using Multiple Silhouettes
    (Springer Verlag, 2017) Karaimer, Hakkı Can; Barış, İpek; Baştanlar, Yalın
    To detect and classify vehicles in omnidirectional videos, we propose an approach based on the shape (silhouette) of the moving object obtained by background subtraction. Different from other shape-based classification techniques, we exploit the information available in multiple frames of the video. We investigated two different approaches for this purpose. One is combining silhouettes extracted from a sequence of frames to create an average silhouette; the other is making individual decisions for all frames and using the consensus of these decisions. Using multiple frames eliminates most of the wrong decisions caused by a poorly extracted silhouette from a single video frame. The vehicle types we classify are motorcycle, car (sedan) and van (minibus). The features extracted from the silhouettes are convexity, elongation, rectangularity and Hu moments. We applied two separate methods of classification. The first is a flowchart-based method that we developed; the second is k-nearest-neighbour classification. 60% of the samples in the dataset are used for training. To ensure randomization in the experiments, threefold cross-validation is applied. The results indicate that using multiple silhouettes increases the classification performance.
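Two ingredients of the abstract, silhouette shape features and per-frame kNN decisions followed by a consensus vote, can be sketched as follows. This is a toy version under stated assumptions: only elongation and rectangularity are computed (convexity and Hu moments are omitted), and all function names, values of k, and sample data are illustrative.

```python
import numpy as np
from collections import Counter

def shape_features(mask):
    """Toy versions of two of the silhouette features: elongation
    (bounding-box aspect ratio) and rectangularity (silhouette area
    over bounding-box area). Convexity and Hu moments are omitted."""
    ys, xs = np.nonzero(mask)
    h = int(ys.max() - ys.min() + 1)
    w = int(xs.max() - xs.min() + 1)
    return np.array([max(h, w) / min(h, w), mask.sum() / (h * w)])

def knn_label(train_feats, train_labels, query, k=3):
    """Plain k-nearest-neighbour vote in feature space, one call per
    frame to produce the individual per-frame decisions."""
    order = np.argsort(np.linalg.norm(train_feats - query, axis=1))
    return Counter(train_labels[i] for i in order[:k]).most_common(1)[0][0]

def consensus(per_frame_labels):
    """Majority vote over the individual per-frame decisions."""
    return Counter(per_frame_labels).most_common(1)[0][0]

# A 2x4 solid rectangle: elongation 2.0, rectangularity 1.0.
feats = shape_features(np.ones((2, 4), dtype=np.uint8))
vote = consensus(["car", "van", "car"])
```

The consensus step is what makes a single badly segmented frame harmless: one wrong per-frame label is outvoted by the others.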
  • Conference Object
    Citation - WoS: 16
    Citation - Scopus: 18
    Combining Shape-Based and Gradient-Based Classifiers for Vehicle Classification
    (Institute of Electrical and Electronics Engineers Inc., 2015) Karaimer, Hakkı Can; Çınaroğlu, İbrahim; Baştanlar, Yalın
    In this paper, we present our work on vehicle classification with omnidirectional cameras. In particular, we investigate whether the combined use of shape-based and gradient-based classifiers outperforms the individual classifiers. For shape-based classification, we extract features from the silhouettes in the omnidirectional video frames, which are obtained after background subtraction. Classification is performed with the kNN (k-Nearest Neighbors) method, which has commonly been used in shape-based vehicle classification studies. For gradient-based classification, we employ HOG (Histogram of Oriented Gradients) features. Instead of searching a whole video frame, we extract the features in the region located by the foreground silhouette. We use SVM (Support Vector Machines) as the classifier, since HOG+SVM is a commonly used pair in visual object detection. The vehicle types that we worked on are motorcycle, car and van (minibus). In the experiments, we first analyze the performances of the shape-based and HOG-based classifiers separately. Then, we analyze the performance of the combined classifier, where the two classifiers are fused at the decision level. Results show that the combined classifier is superior to the individual classifiers. © 2015 IEEE.
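The abstract says the two classifiers are "fused at decision level" but does not spell out the rule, so the sketch below uses one simple, hypothetical fusion scheme: each classifier emits a (label, confidence) pair, agreeing labels pass through, and on disagreement the more confident classifier wins. The paper's actual fusion rule may differ.

```python
def fuse_decisions(shape_pred, hog_pred):
    """Hypothetical decision-level fusion of a shape-based (kNN)
    decision and a gradient-based (HOG+SVM) decision. Each input is
    a (label, confidence) pair; ties in confidence favour the
    shape-based classifier. Illustrative only."""
    (shape_label, shape_conf), (hog_label, hog_conf) = shape_pred, hog_pred
    if shape_label == hog_label:
        return shape_label          # classifiers agree
    return shape_label if shape_conf >= hog_conf else hog_label
```

The appeal of decision-level fusion is that the two feature pipelines (silhouette features vs. HOG) stay completely independent; only their final outputs are combined.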
  • Conference Object
    Citation - Scopus: 5
    Detection and Classification of Vehicles From Omnidirectional Videos Using Temporal Average of Silhouettes
    (INSTICC, 2015) Karaimer, Hakkı Can; Baştanlar, Yalın
    This paper describes an approach to detect and classify vehicles in omnidirectional videos. The proposed classification method is based on the shape (silhouette) of the detected moving object obtained by background subtraction. Different from other shape-based classification techniques, we exploit the information available in multiple frames of the video. The silhouettes extracted from a sequence of frames are combined to create an 'average' silhouette. This approach eliminates most of the wrong decisions caused by a poorly extracted silhouette from a single video frame. The vehicle types that we worked on are motorcycle, car (sedan) and van (minibus). The features extracted from the silhouettes are convexity, elongation, rectangularity, and Hu moments. The decision boundaries in the feature space are determined using a training set, whereas the performance of the proposed classification is measured with a test set. To ensure randomization, the procedure is repeated with the whole dataset split differently into training and testing samples. The results indicate that the proposed method of using average silhouettes performs better than using the silhouettes in a single frame.
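Several of the abstracts above describe the same evaluation protocol: repeatedly shuffle the dataset and re-split it into training and test samples so results do not depend on one particular split. A minimal sketch of that protocol, with an assumed 60/40 ratio and repeat count (the exact values and function name are illustrative):

```python
import random

def shuffled_splits(samples, train_ratio=0.6, repeats=3, seed=0):
    """Hypothetical re-creation of the randomized protocol: shuffle
    the dataset and split it into train/test `repeats` times; the
    reported metric would then be averaged over the repeats."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    n_train = int(len(samples) * train_ratio)
    for _ in range(repeats):
        pool = list(samples)
        rng.shuffle(pool)
        yield pool[:n_train], pool[n_train:]

splits = list(shuffled_splits(list(range(10))))
```

Each repeat uses the full dataset, just partitioned differently, so every sample appears in some test set across the runs.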