Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection

Permanent URI for this collection: https://hdl.handle.net/11147/7148

  • Article
    Novel Methods for Depth-Based Calibration of Multiple RGBD Cameras Using Four Mutually Equidistant Spheres
    (IEEE-Inst Electrical Electronics Engineers Inc, 2025) Calı, Esra Tuncer; Gumustekin, Sevket
    This article presents novel calibration methods specifically tailored for multiple depth cameras, utilizing solely depth images. Traditional approaches often rely on infrared (IR) images of checkerboards, which, while feasible, fail to exploit the measured depth values, leading to calibration inaccuracies and 3-D misregistration errors. To overcome this limitation, we designed a 3-D tetrahedron object comprising four spheres placed at each corner. By employing an ellipse-fitting technique, we accurately identified the sphere centers in the depth images. Using these centers, we utilized 3-D reprojection errors and measured depths within a bundle adjustment framework to jointly determine the calibration parameters for four depth cameras. Our proposed methods significantly reduce error values compared with those obtained using IR images of checkerboards. The versatility of our techniques ensures their applicability to various types of depth cameras, independent of their underlying technologies. Here, we demonstrate that by integrating depth information directly into the calibration process, we achieve remarkable improvements. Our first method reduces the average system reconstruction error by 78.98%, while our second method, which introduces a novel cost function tailored to the tetrahedron object, achieves an even more substantial reduction of 82.32%. These results underscore the superiority of our depth-integrated calibration approach, particularly in the context of 3-D reconstruction involving multiple depth cameras.
  • Conference Object
    Serum Creatinine Detection in a Microfluidic Chip Using a Smartphone Camera
    (Chemical and Biological Microsystems Society, 2022) Karakuzu, B.; Tarim, E.A.; Tekin, H.C.
    We present a microfluidic chip platform that detects serum creatinine levels using the enzyme-linked immunosorbent assay (ELISA) principle. In the platform, a surface-modified microfluidic channel sensitively captures target molecules from the serum sample, and the ELISA protocol is then carried out inside the channels. Afterward, the blue color formed by the enzymatic reaction is measured with a smartphone camera. The proposed strategy allows rapid detection of creatinine in minute amounts of serum without the need for expensive equipment. Thus, chronic kidney disease (CKD) could be monitored easily in point-of-care settings via the proposed creatinine detection strategy. © 2022 MicroTAS 2022 - 26th International Conference on Miniaturized Systems for Chemistry and Life Sciences. All rights reserved.
  • Article
    Citation - WoS: 43
    Citation - Scopus: 47
    Semantic Segmentation of Outdoor Panoramic Images
    (Springer, 2021) Orhan, Semih; Baştanlar, Yalın
    Omnidirectional cameras are capable of providing a 360° field of view in a single shot. This comprehensive view makes them preferable for many computer vision applications. An omnidirectional view is generally represented as a panoramic image with equirectangular projection, which suffers from distortions. Thus, standard camera approaches must be mathematically modified to work effectively with panoramic images. In this work, we built a semantic segmentation CNN model that handles distortions in panoramic images using equirectangular convolutions. The proposed model, which we call UNet-equiconv, outperforms an equivalent CNN model with standard convolutions. To the best of our knowledge, ours is the first work on the semantic segmentation of real outdoor panoramic images. Experimental results reveal that using a distortion-aware CNN with equirectangular convolution increases semantic segmentation performance (a 4% increase in mIoU). We also released a pixel-level annotated outdoor panoramic image dataset which can be used for various computer vision applications such as autonomous driving and visual localization. The source code of the project and the dataset are available at the project page (https://github.com/semihorhan/semseg-outdoor-pano). © 2021, The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature.
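
The calibration article above locates sphere centers in depth images before bundle adjustment. As an illustrative sketch only — not the authors' ellipse-fitting pipeline; the function name and the NumPy dependency are assumptions — a sphere center and radius can be recovered from back-projected 3-D depth points with a standard linear least-squares sphere fit:

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit: estimate center and radius from 3-D points.

    Linearizes ||p - c||^2 = r^2 into A x = b, where
    x = (cx, cy, cz, r^2 - ||c||^2) and each row of A is (2px, 2py, 2pz, 1).
    """
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([2.0 * pts, np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = x[:3]
    radius = np.sqrt(x[3] + center @ center)
    return center, radius
```

The linearization makes the fit a single least-squares solve with no initial guess, which is convenient when the points are partial sphere caps as seen by a depth camera; a real pipeline would additionally reject outlier depth samples before fitting.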
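The microfluidic abstract above reads out an enzymatic blue color with a smartphone camera. A minimal sketch of a generic colorimetric readout, assuming a linear intensity-versus-concentration calibration curve — the function names and the linear model are illustrative assumptions, not the authors' protocol:

```python
import numpy as np

def mean_blue_intensity(rgb_image):
    """Average blue-channel intensity over an H x W x 3 RGB image region."""
    return float(np.asarray(rgb_image, dtype=float)[..., 2].mean())

def calibrate_linear(intensities, concentrations):
    """Fit intensity = slope * concentration + intercept from standards."""
    slope, intercept = np.polyfit(concentrations, intensities, 1)
    return slope, intercept

def estimate_concentration(intensity, slope, intercept):
    """Invert the calibration line to map a measured intensity to a concentration."""
    return (intensity - intercept) / slope
```

In practice the calibration standards would be known creatinine dilutions imaged under the same lighting as the sample; the sketch omits white-balance and illumination correction, which any real smartphone readout would need.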
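The panoramic segmentation abstract above reports its gain in mIoU. The standard mean intersection-over-union metric can be computed from predicted and ground-truth label maps as follows — a sketch of the common definition, with the function name and NumPy usage as assumptions, not the paper's evaluation code:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union over classes present in pred or target."""
    pred = np.asarray(pred).ravel()
    target = np.asarray(target).ravel()
    # Confusion matrix: rows = ground truth, columns = prediction.
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(conf, (target, pred), 1)
    inter = np.diag(conf).astype(float)
    union = conf.sum(axis=0) + conf.sum(axis=1) - np.diag(conf)
    valid = union > 0  # skip classes absent from both maps
    return float((inter[valid] / union[valid]).mean())
```

Accumulating one confusion matrix over the whole test set, rather than averaging per-image IoUs, is the usual convention for reporting segmentation benchmarks.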