Computer Engineering / Bilgisayar Mühendisliği
Permanent URI for this collection: https://hdl.handle.net/11147/10
Search Results
Now showing 1 - 8 of 8
Article · Citation - Scopus: 3
Development of Chrono-Spectral Gold Nanoparticle Growth Based Plasmonic Biosensor Platform (Elsevier, 2024)
Sözmen, Alper Baran; Elveren, Beste; Erdoğan, Duygu; Mezgil, Bahadır; Baştanlar, Yalın; Yıldız, Ümit Hakan; Arslan Yıldız, Ahu
Plasmonic sensor platforms are designed for rapid, label-free, and real-time detection, and they excel as next-generation biosensors. However, current methods such as Surface Plasmon Resonance require expertise and well-equipped laboratory facilities. Simpler methods such as Localized Surface Plasmon Resonance (LSPR) overcome those limitations, though they lack sensitivity. Hence, sensitivity enhancement plays a crucial role in the future of plasmonic sensor platforms. Herein, a refractive index (RI) sensitivity enhancement methodology utilizing the growth of gold nanoparticles (GNPs) on a solid support is reported, backed up with artificial neural network (ANN) analysis. Sensor platform fabrication was initiated with GNP immobilization onto a solid support; the immobilized GNPs were then used as seeds for chrono-spectral growth, which was carried out using NH2OH at varied incubation times. The response of the platform to RI change was investigated with varied concentrations of sucrose and ethanol. For validation, detection of E. coli BL21, a model microorganism, was carried out, and the results showed that detection was possible at 10² CFU/ml. The data acquired by spectrophotometric measurements were analyzed by ANN, and bacteria classification with error rates near 0% was achieved. The proposed LSPR-based, label-free sensor application proved that the developed methodology offers useful sensitivity-enhancement potential for similar sensor platforms.
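The entry above pairs LSPR spectral measurements with an ANN classifier. As a rough illustration of the kind of signal such a classifier consumes, the sketch below generates synthetic extinction spectra whose peak red-shifts when an analyte binds, then trains a logistic-regression classifier as a minimal stand-in for the paper's ANN. All peak positions, shift magnitudes, and model choices here are hypothetical illustrations, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 700, 301)  # 1 nm grid

def spectrum(peak_nm, noise=0.005):
    # Lorentzian-shaped extinction band (illustrative, not fitted to real GNP data)
    s = 1.0 / (1.0 + ((wavelengths - peak_nm) / 30.0) ** 2)
    return s + noise * rng.standard_normal(wavelengths.size)

# Class 0: bare GNP film (peak ~520 nm, hypothetical); class 1: analyte-bound
# film whose local refractive-index increase red-shifts the band (~540 nm).
X = np.array([spectrum(520) for _ in range(20)] + [spectrum(540) for _ in range(20)])
y = np.array([0] * 20 + [1] * 20)

# Feature: peak wavelength of each spectrum.
peaks = wavelengths[X.argmax(axis=1)].astype(float)

# Logistic regression via gradient descent, a minimal stand-in for the ANN.
f = (peaks - peaks.mean()) / peaks.std()
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(f * w + b)))  # predicted probability of class 1
    g = p - y                               # gradient of the log loss
    w -= 0.1 * (g * f).mean()
    b -= 0.1 * g.mean()

pred = (1.0 / (1.0 + np.exp(-(f * w + b))) > 0.5).astype(int)
accuracy = (pred == y).mean()
print(f"toy classification accuracy: {accuracy:.2f}")
```

On this cleanly separated toy data the classifier separates the two classes; the paper's near-0% error rates, of course, refer to its real spectrophotometric measurements.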
© 2024 The Author(s)

Article · Citation - Scopus: 3
Cut-In Maneuver Detection With Self-Supervised Contrastive Video Representation Learning (Springer, 2023)
Nalçakan, Yağız; Baştanlar, Yalın
Detecting the maneuvers of surrounding vehicles is important for autonomous vehicles so that they can act accordingly to avoid possible accidents. This study proposes a framework based on contrastive representation learning to detect potentially dangerous cut-in maneuvers that can happen in front of the ego vehicle. First, the encoder network is trained in a self-supervised fashion with a contrastive loss, where two augmented versions of the same video clip stay close to each other in the embedding space, while augmentations from different videos stay far apart. Since no maneuver labeling is required in this step, a relatively large dataset can be used. After this self-supervised training, the encoder is fine-tuned with our cut-in/lane-pass labeled datasets. Instead of using the original video frames, we simplified the scene by highlighting the surrounding vehicles and the ego lane. We investigated the use of several classification heads, augmentation types, and scene simplification alternatives. The most successful model outperforms the best fully supervised model by ~2%, with an accuracy of 92.52%.

Article
Soft Error Vulnerability Prediction of GPGPU Applications (Springer, 2022)
Topçu, Burak; Öz, Işıl
As graphics processing units (GPUs) evolve to offer high performance for general-purpose computations in addition to inherently fault-tolerant graphics applications, soft error reliability becomes a significant concern. Fault injection provides a method of evaluating the soft error vulnerability of target programs.
Since performing fault injection experiments for complex GPU hardware structures takes an impractical amount of time, prediction-based techniques that evaluate the soft error vulnerability of general-purpose GPU (GPGPU) programs from metrics in different domains become crucial for both HPC developers and GPU vendors. In this work, we propose machine learning (ML)-based prediction frameworks for the soft error vulnerability evaluation of GPGPU programs. We consider program characteristics, hardware usage, and performance metrics collected from simulation and profiling tools. While we utilize regression models to predict the masked fault rates, we build classification models to specify the vulnerability level of GPGPU programs based on their silent data corruption (SDC) and crash rates. Our prediction models achieve maximum prediction accuracy rates of 95.9%, 88.46%, and 85.7% for masked fault rates, SDCs, and crashes, respectively.

Article
Label-Free Retraining for Improved Ground Plane Segmentation (Springer, 2022)
Uzyıldırım, Furkan Eren; Özuysal, Mustafa
Due to the increasing potential applications of unmanned aerial vehicles over urban areas, algorithms for the safe landing of these devices have become more critical. One way to ensure a safe landing is to use deep semantic segmentation networks to locate ground plane regions free of obstacles in images captured by the device camera. In this paper, we study the performance of semantic segmentation networks trained for this purpose at a particular altitude and location. We show that a variation in altitude and location significantly decreases network performance. We then propose an approach to retrain the network using only a new set of images, without marking the ground regions in this novel training set.
Our experiments show that we can convert a network's operating range from low to high altitudes, and vice versa, by label-free retraining.

Article
Performance and Accuracy Predictions of Approximation Methods for Shortest-Path Algorithms on GPUs (Elsevier, 2022)
Aktılav, Busenur; Öz, Işıl
Approximate computing techniques, where less-than-perfect solutions are acceptable, present performance-accuracy trade-offs by performing inexact computations. Moreover, heterogeneous architectures, combinations of miscellaneous compute units, offer high performance as well as energy efficiency. Graph algorithms utilize the parallel computation units of heterogeneous GPU architectures as well as the performance improvements offered by approximation methods. Since different approximations yield different speedups and accuracy losses for the target execution, it becomes impractical to test all methods with various parameters. In this work, we perform approximate computations for three shortest-path graph algorithms and propose a machine learning framework to predict the impact of the approximations on program performance and output accuracy. We evaluate predictions for both synthetic and real road-network graphs, as well as predictions for large graph cases made from small graph instances. We achieve prediction error rates of less than 5% for speedup and inaccuracy values.

Article · Citation - WoS: 8 · Citation - Scopus: 9
Dementia diagnosis by ensemble deep neural networks using FDG-PET scans (Springer, 2022)
Yiğit, Altuğ; Baştanlar, Yalın; Işık, Zerrin
Dementia is a type of brain disease that affects mental abilities. Various studies utilize PET features or two-dimensional brain perspectives to diagnose dementia. In this study, we propose an ensemble approach that employs volumetric and axial-perspective features for the diagnosis of Alzheimer's disease and of patients with mild cognitive impairment. We employed deep learning models and constructed two disparate networks.
The first network evaluates volumetric features, and the second network assesses grid-based brain scan features. The decisions of these networks were combined by an adaptive majority voting algorithm to create an ensemble learner. In our evaluations, we compared ensemble networks with single networks as well as with feature fusion networks to identify possible improvements; as a result, the ensemble method turned out to be promising for making a diagnostic decision. The proposed ensemble network achieved an average accuracy of 91.83% for the diagnosis of Alzheimer's disease; to the best of our knowledge, this is the highest diagnosis performance in the literature.

Article · Citation - Scopus: 1
A Method for Integrated Business Process Modeling and Ontology Development (Emerald, 2022)
Coşkunçay, Ahmet; Demirörs, Onur
Purpose: From a knowledge management point of view, business process models and ontologies are two essential knowledge artifacts for organizations that consume similar information sources. In this study, the PROMPTUM method for integrated process modeling and ontology development, which adheres to well-established practices, is presented. The method is intended to guide practitioners who develop both ontologies and business process models in the same or similar domains. Design/methodology/approach: The method is supported by a recently developed toolset, which supports the modeling of relations between ontologies and the labels within process model collections. This study introduces the method and its companion toolset. An explanatory study comprising two case studies was designed and conducted to reveal and validate the benefits of using the method. A follow-up semi-structured interview then identified the perceived benefits of the method. Findings: Application of the method revealed several benefits, including improvements in the consistency and completeness of the process models and ontologies.
The method brings together best practices from two domains, guiding the use of labels within process model collections in ontology development and of ontology resources in business process modeling. Originality/value: The proposed method, with its tool support, is a pioneer in enabling labels, and the terms within them, in process model collections to be managed consistently with ontology resources. Establishing these relations enables the definition and management of process model elements as resources in domain ontologies. Once the PROMPTUM method is utilized, a related resource is managed as a single resource representing the same real-world object in both artifacts. An explanatory study has shown that improvement in the consistency and completeness of process models and ontologies is possible with integrated process modeling and ontology development.

Article · Citation - Scopus: 1
Model-Based Ideal Testing of Hardware Description Language (HDL) Programs (Springer, 2021)
Kılınççeker, Onur; Türk, Ercüment; Belli, Fevzi; Challenger, Moharram
An ideal test is supposed to show not only the presence of bugs but also their absence. Based on the Fundamental Test Theory of Goodenough and Gerhart (IEEE Trans Softw Eng SE-1(2):156–173, 1975), this paper proposes an approach to model-based ideal testing of hardware description language (HDL) programs based on their behavioral model. Test sequences are generated from both original (fault-free) and mutant (faulty) models in the sense of positive and negative testing, forming a holistic test view. These test sequences are then executed on original (fault-free) and mutant (faulty) HDL programs, in the sense of mutation testing. Using techniques known from automata theory, test selection criteria are developed, and it is formally shown that they fulfill the major requirements of the Fundamental Test Theory, namely reliability and validity.
The proposed approach comprises a preparation step (consisting of the sub-steps model construction, model mutation, model conversion, and test generation) and a composition step (consisting of the sub-steps pre-selection and construction of ideal test suites). All steps are supported by a toolchain that is already implemented and available online. To critically validate the proposed approach, three case studies (a sequence detector, a traffic light controller, and a RISC-V processor) are used, and the strengths and weaknesses of the approach are discussed. The proposed approach achieves the highest mutation score in positive and negative testing for all case studies in comparison with two existing methods (regular expression-based test generation and context-based random test generation), using four different techniques.
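The mutation-testing idea underlying the last entry, generating tests that the fault-free model passes (positive testing) while faulty mutants fail (negative testing), can be sketched with a toy Python stand-in. The sequence detector, the mutants, and the test suite below are hypothetical illustrations, not the paper's HDL artifacts or toolchain.

```python
def detect_101(bits):
    """Original (fault-free) model: True iff the pattern '101' occurs in the bit string."""
    return "101" in bits

# Mutants: deliberately faulty variants of the model.
mutants = [
    lambda bits: "111" in bits,      # wrong pattern
    lambda bits: "10" in bits,       # pattern too short
    lambda bits: "101" not in bits,  # negated output
]

# Test suite: (input, expected output) pairs, playing the role of the
# test sequences generated from the fault-free and faulty models.
tests = [("00101", True), ("1111", False), ("1010", True), ("000", False), ("10", False)]

def passes(impl):
    return all(impl(t) == expected for t, expected in tests)

assert passes(detect_101)                     # positive testing: original must pass
killed = sum(not passes(m) for m in mutants)  # negative testing: mutants should fail
score = killed / len(mutants)
print(f"mutation score: {score:.2f}")
```

Here every mutant is killed, so the toy suite reaches a mutation score of 1.0; the paper's contribution is selecting such suites systematically, with formal reliability and validity guarantees, for real HDL programs.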
