Computer Engineering / Bilgisayar Mühendisliği

Permanent URI for this collection: https://hdl.handle.net/11147/10

Search Results

Now showing 1 - 2 of 2
  • Conference Object
    Citation - Scopus: 1
    Monocular Vision-Based Prediction of Cut-In Manoeuvres with LSTM Networks
    (Springer, 2023) Nalçakan, Yağız; Baştanlar, Yalın
    Advanced driver assistance and automated driving systems should be capable of predicting and avoiding dangerous situations. In this paper, we first discuss the importance of predicting dangerous lane changes and provide its description as a machine learning problem. After summarizing the previous work, we propose a method to predict potentially dangerous lane changes (cut-ins) of the vehicles in front. We follow a computer vision-based approach that only employs a single in-vehicle RGB camera, and we classify the target vehicle’s maneuver based on the recent video frames. Our algorithm consists of a CNN-based vehicle detection and tracking step and an LSTM-based maneuver classification step. It is computationally efficient compared to other vision-based methods since it exploits a small number of features for the classification step rather than feeding CNNs with RGB frames. We evaluated our approach on a publicly available driving dataset and a lane change detection dataset. We obtained 0.9585 accuracy with the side-aware two-class (cut-in vs. lane-pass) classification model. Experiment results also reveal that our approach outperforms state-of-the-art approaches when used for lane change detection. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023.
  • Article
    Citation - Scopus: 3
    Cut-In Maneuver Detection With Self-Supervised Contrastive Video Representation Learning
    (Springer, 2023) Nalçakan, Yağız; Baştanlar, Yalın
    Detecting the maneuvers of surrounding vehicles is important for autonomous vehicles to act in time and avoid possible accidents. This study proposes a framework based on contrastive representation learning to detect potentially dangerous cut-in maneuvers that can happen in front of the ego vehicle. First, the encoder network is trained in a self-supervised fashion with a contrastive loss under which two augmented versions of the same video clip stay close to each other in the embedding space, while augmentations from different videos stay far apart. Since no maneuver labeling is required in this step, a relatively large dataset can be used. After this self-supervised training, the encoder is fine-tuned with our cut-in/lane-pass labeled datasets. Instead of using original video frames, we simplified the scene by highlighting surrounding vehicles and the ego-lane. We investigated several classification heads, augmentation types, and scene simplification alternatives. The most successful model outperforms the best fully supervised model by ∼2%, reaching an accuracy of 92.52%.
    (A minimal sketch of this kind of contrastive loss appears after this list.)
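
For the first item above, here is a minimal sketch (in PyTorch) of what an LSTM-based maneuver classification step over a small number of per-frame features could look like. The feature choice (tracked bounding-box geometry), dimensions, and sequence length are illustrative assumptions, not the paper's actual configuration.

import torch
import torch.nn as nn

class ManeuverLSTM(nn.Module):
    """Classifies a short per-frame feature sequence as cut-in vs. lane-pass."""
    def __init__(self, feat_dim=4, hidden_dim=64, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        # x: (batch, T, feat_dim) -- per-frame features produced by a
        # CNN-based detection and tracking front end (assumed here to be
        # the target vehicle's bounding-box x, y, width, height).
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])  # logits from the last hidden state

# Example: a batch of 8 clips, 30 frames each, 4 features per frame.
model = ManeuverLSTM()
logits = model(torch.randn(8, 30, 4))  # shape (8, 2)

Because the LSTM sees only a handful of features per frame rather than full RGB frames, inference stays cheap, which matches the computational-efficiency claim in the abstract.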
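
For the second item, here is a minimal sketch of a SimCLR-style contrastive (NT-Xent) loss of the kind the abstract describes: embeddings of two augmented views of the same clip are pulled together while views of different clips are pushed apart. The temperature, batch size, and embedding dimension are assumptions; the paper's exact loss and encoder are not reproduced here.

import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    # z1, z2: (N, D) embeddings of two augmented views of the same N clips,
    # produced by an (assumed) video encoder.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D)
    sim = z @ z.t() / temperature                       # pairwise similarities
    sim.fill_diagonal_(float('-inf'))                   # exclude self-pairs
    n = z1.size(0)
    # The positive for view i is its counterpart from the other augmentation.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Example: 16 clips, two augmented views each, 128-dim embeddings.
loss = nt_xent_loss(torch.randn(16, 128), torch.randn(16, 128))

After this self-supervised stage, the encoder would be fine-tuned with the labeled cut-in/lane-pass data, as the abstract describes.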