Computer Engineering / Bilgisayar Mühendisliği

Permanent URI for this collection: https://hdl.handle.net/11147/10

Search Results

Now showing 1 - 10 of 21
  • Article
    Citation - WoS: 3
    Citation - Scopus: 6
    An Exploratory Case Study Using Events as a Software Size Measure
    (Springer, 2023) Hacaloğlu, Tuna; Demirörs, Onur
    Software size measurement is a critical task in the Software Development Life Cycle (SDLC). It is the primary input for effort estimation models and an important measure for project control and process improvement. Various size measurement methods exist whose success has already been proven for traditional software architectures and application domains. Among them, functional size measurement (FSM) attracts particular attention due to its applicability in the early phases of the SDLC. Although FSM methods were successful on database-centric, transaction-oriented stand-alone applications, contemporary software development projects rely heavily on Agile methods; a centralized database and a relational approach are no longer used as before, while the requirements suffer from a lack of detail. Today's software is frequently service-based, highly distributed, message-driven, and scalable, and it offers unprecedented levels of availability. In this new era, event-driven architectures are emerging as one of the prominent approaches, where the 'event' concept largely replaces the 'data' concept. Considering the important place of events in contemporary architectures, we approached the software size measurement problem from the event-driven perspective. This led us to explore how useful events are as a size measure in comparison to data-movement-based methods. The findings of our study indicate that events can be promising for measurement and should be investigated further so that they can be formalized into a measurement model, thereby providing a replicable approach.
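As a toy illustration of the comparison this study explores, the sketch below sizes the same requirement two ways: by counting COSMIC-style data movements and by counting the distinct events it involves. The requirement format, field names, event names, and numbers are all invented for illustration; the paper's actual measurement procedure is not reproduced here.

```python
# Hypothetical requirement record: which data movements it performs
# (COSMIC-style) and which events it produces or consumes.
requirement = {
    "name": "Place order",
    "data_movements": ["Entry: order details", "Read: stock",
                       "Write: order", "Exit: confirmation"],
    "events": ["OrderPlaced", "StockChecked", "OrderConfirmed"],
}

def cosmic_size(req):
    """Data-movement-based size: 1 CFP per data movement."""
    return len(req["data_movements"])

def event_size(req):
    """Event-based size: 1 unit per distinct event."""
    return len(set(req["events"]))

print(cosmic_size(requirement), event_size(requirement))  # -> 4 3
```

The interesting question the study raises is whether the event count tracks development effort as well as (or better than) the data-movement count when requirements lack detail.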
  • Article
    Citation - Scopus: 3
    Cut-In Maneuver Detection With Self-Supervised Contrastive Video Representation Learning
    (Springer, 2023) Nalçakan, Yağız; Baştanlar, Yalın
    The detection of the maneuvers of surrounding vehicles is important for autonomous vehicles, which must act accordingly to avoid possible accidents. This study proposes a framework based on contrastive representation learning to detect potentially dangerous cut-in maneuvers that can happen in front of the ego vehicle. First, the encoder network is trained in a self-supervised fashion with a contrastive loss, where two augmented versions of the same video clip stay close to each other in the embedding space while augmentations from different videos stay far apart. Since no maneuver labeling is required in this step, a relatively large dataset can be used. After this self-supervised training, the encoder is fine-tuned with our cut-in/lane-pass labeled datasets. Instead of using the original video frames, we simplified the scene by highlighting the surrounding vehicles and the ego lane. We investigated the use of several classification heads, augmentation types, and scene simplification alternatives. The most successful model outperforms the best fully supervised model by ∼2%, with an accuracy of 92.52%.
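The self-supervised step described above can be sketched with an NT-Xent-style contrastive loss, a common formulation in contrastive representation learning (the paper's exact loss and hyperparameters are not given here, so this is an assumption): embeddings of two augmentations of the same clip are pulled together, while all other embeddings in the batch are pushed apart.

```python
import math

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over a batch of paired embeddings.
    z1[i] and z2[i] are embeddings of two augmentations of the same clip."""
    def normalize(v):
        s = math.sqrt(sum(x * x for x in v))
        return [x / s for x in v]
    z = [normalize(v) for v in list(z1) + list(z2)]   # 2N unit vectors
    n = len(z1)
    total = 0.0
    for i, zi in enumerate(z):
        pos = (i + n) % (2 * n)                        # the augmented twin
        pos_sim = sum(a * b for a, b in zip(zi, z[pos])) / temperature
        # denominator: all other embeddings, including the positive
        denom = sum(math.exp(sum(a * b for a, b in zip(zi, zj)) / temperature)
                    for j, zj in enumerate(z) if j != i)
        total += -(pos_sim - math.log(denom))          # cross-entropy term
    return total / (2 * n)
```

Training the encoder to minimize this loss needs no maneuver labels, which is why the pre-training set can be much larger than the labeled cut-in/lane-pass set used for fine-tuning.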
  • Article
    Soft Error Vulnerability Prediction of GPGPU Applications
    (Springer, 2022) Topçu, Burak; Öz, Işıl
    As graphics processing units (GPUs) evolve to offer high performance for general-purpose computations in addition to inherently fault-tolerant graphics applications, soft error reliability becomes a significant concern. Fault injection provides a method of evaluating the soft error vulnerability of target programs. Since performing fault injection experiments for complex GPU hardware structures takes an impractically long time, prediction-based techniques that evaluate the soft error vulnerability of general-purpose GPU (GPGPU) programs from metrics in different domains become crucial for both HPC developers and GPU vendors. In this work, we propose machine learning (ML)-based prediction frameworks for the soft error vulnerability evaluation of GPGPU programs. We consider program characteristics, hardware usage, and performance metrics collected from simulation and profiling tools. While we utilize regression models to predict the masked fault rates, we build classification models to specify the vulnerability level of GPGPU programs based on their silent data corruption (SDC) and crash rates. Our prediction models achieve maximum prediction accuracy rates of 95.9%, 88.46%, and 85.7% for masked fault rates, SDCs, and crashes, respectively.
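A minimal sketch of the classification idea, with invented feature names and data: map a kernel's profiling metrics to a vulnerability level with a learned classifier. A 1-nearest-neighbour stand-in replaces the paper's actual ML models, which are not specified here.

```python
import math

def predict_vulnerability(train, query):
    """train: list of (feature_vector, label) pairs from fault-injection
    campaigns; query: feature vector of an unseen kernel."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # label of the closest training kernel in feature space
    return min(train, key=lambda pair: dist(pair[0], query))[1]

# hypothetical (ipc, memory_utilization, branch_divergence) -> SDC level
training = [
    ((0.9, 0.2, 0.1), "low"),
    ((0.4, 0.8, 0.6), "high"),
    ((0.6, 0.5, 0.3), "medium"),
]
print(predict_vulnerability(training, (0.85, 0.25, 0.15)))  # -> low
```

The appeal of this setup is that the expensive fault-injection campaigns are only needed to label the training kernels; new kernels are then assessed from cheap profiling metrics alone.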
  • Article
    Label-Free Retraining for Improved Ground Plane Segmentation
    (Springer, 2022) Uzyıldırım, Furkan Eren; Özuysal, Mustafa
    Due to increased potential applications of unmanned aerial vehicles over urban areas, algorithms for the safe landing of these devices have become more critical. One way to ensure a safe landing is to locate the ground plane regions of images captured by the device camera that are free of obstacles by deep semantic segmentation networks. In this paper, we study the performance of semantic segmentation networks trained for this purpose at a particular altitude and location. We show that a variation in altitude and location significantly decreases network performance. We then propose an approach to retrain the network using only a new set of images and without marking the ground regions in this novel training set. Our experiments show that we can convert a network’s operating range from low to high altitudes and vice versa by label-free retraining.
  • Conference Object
    Citation - WoS: 1
    Citation - Scopus: 1
    Application of Human-Robot Interaction Features To Design and Purchase Processes of Home Robots
    (Springer, 2021) Yapıcı, Nur Beril; Tuğlular, Tuğkan; Başoğlu, Ahmet Nuri
    The production of home robots, such as robotic vacuum cleaners, currently focuses more on the technology and its engineering than on the needs of people and their interaction with robots. An observation supporting this view is that home robots are not customizable; in other words, buyers cannot select features and have their home robots built to order. Stemming from this observation, the paper proposes an approach that starts with a classification of the features of home robots. This classification concerns robot interaction with humans and the environment, a home in our case. Following the classification, the proposed approach utilizes a new hybrid model based on a built-to-order model and a dynamic eco-strategy explorer model, enabling designers to develop a production line and buyers to customize their home robots with the classified features. Finally, we applied the proposed approach to robotic vacuum cleaners. We developed a feature model for robotic vacuum cleaners, from which we formed a common use-scenario model.
  • Article
    Citation - WoS: 8
    Citation - Scopus: 9
    Dementia diagnosis by ensemble deep neural networks using FDG-PET scans
    (Springer, 2022) Yiğit, Altuğ; Baştanlar, Yalın; Işık, Zerrin
    Dementia is a type of brain disease that affects mental abilities. Various studies utilize PET features or two-dimensional brain perspectives to diagnose dementia. In this study, we propose an ensemble approach that employs volumetric and axial-perspective features for the diagnosis of Alzheimer's disease and of patients with mild cognitive impairment. We employed deep learning models and constructed two disparate networks. The first network evaluates volumetric features, and the second network assesses grid-based brain scan features. The decisions of these networks were combined by an adaptive majority voting algorithm to create an ensemble learner. In our evaluations, we compared ensemble networks with single networks as well as with feature fusion networks to identify possible improvement; as a result, the ensemble method turned out to be promising for making a diagnostic decision. The proposed ensemble network achieved an average accuracy of 91.83% for the diagnosis of Alzheimer's disease; to the best of our knowledge, this is the highest diagnosis performance in the literature.
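The ensemble step can be sketched as follows. This is one plausible form of confidence-weighted voting, not the paper's exact algorithm: each network casts a vote for its most probable class, and votes are weighted by the voter's confidence so that the more certain network dominates when the two disagree.

```python
def adaptive_majority_vote(predictions):
    """predictions: one dict per network, mapping class label -> probability.
    Returns the label with the highest confidence-weighted vote total."""
    scores = {}
    for probs in predictions:
        label = max(probs, key=probs.get)          # this network's vote
        scores[label] = scores.get(label, 0.0) + probs[label]
    return max(scores, key=scores.get)

# hypothetical outputs for one scan (AD / MCI / cognitively normal)
net_volumetric = {"AD": 0.7, "MCI": 0.2, "CN": 0.1}
net_grid       = {"AD": 0.4, "MCI": 0.5, "CN": 0.1}
print(adaptive_majority_vote([net_volumetric, net_grid]))  # -> AD
```

Here the grid-based network narrowly prefers MCI, but the volumetric network's stronger confidence in AD carries the ensemble decision.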
  • Article
    Citation - Scopus: 1
    Model-Based Ideal Testing of Hardware Description Language (HDL) Programs
    (Springer, 2021) Kılınççeker, Onur; Türk, Ercüment; Belli, Fevzi; Challenger, Moharram
    An ideal test is supposed to show not only the presence of bugs but also their absence. Based on the Fundamental Test Theory of Goodenough and Gerhart (IEEE Trans Softw Eng SE-1(2):156–173, 1975), this paper proposes an approach to model-based ideal testing of hardware description language (HDL) programs based on their behavioral model. Test sequences are generated from both original (fault-free) and mutant (faulty) models in the sense of positive and negative testing, forming a holistic test view. These test sequences are then executed on original (fault-free) and mutant (faulty) HDL programs, in the sense of mutation testing. Using techniques known from automata theory, test selection criteria are developed, and it is formally shown that they fulfill the major requirements of the Fundamental Test Theory, that is, reliability and validity. The proposed approach comprises a preparation step (consisting of the sub-steps model construction, model mutation, model conversion, and test generation) and a composition step (consisting of the sub-steps pre-selection and construction of ideal test suites). All steps are supported by a toolchain that has already been implemented and is available online. To critically validate the proposed approach, three case studies (a sequence detector, a traffic light controller, and a RISC-V processor) are used, and the strengths and weaknesses of the approach are discussed. The proposed approach achieves the highest mutation score in positive and negative testing for all case studies in comparison with two existing methods (regular expression-based test generation and context-based random test generation), using four different techniques.
  • Article
    Citation - WoS: 15
    Citation - Scopus: 18
    Achieving Query Performance in the Cloud Via a Cost-Effective Data Replication Strategy
    (Springer, 2021) Tos, Uras; Mokadem, Riad; Hameurlain, Abdelkader; Ayav, Tolga
    Meeting the performance expectations of tenants without sacrificing economic benefit is a tough challenge for cloud providers. We propose a data replication strategy that simultaneously satisfies both performance and provider profit. The response time of database queries is estimated with parallel execution taken into account. If the estimated response time is not acceptable, bottlenecks are identified in the query plan, and data replication is carried out to resolve them. Data placement is performed heuristically so as to satisfy query response times at minimal cost to the provider. We demonstrate the validity of our strategy in a performance evaluation study.
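The decision logic described above can be illustrated with a toy sketch (all numbers and function names hypothetical): estimate a query's parallel response time as the slowest node's share of the work, and trigger replication only when the tenant's response-time objective would otherwise be missed, keeping the provider's storage cost low.

```python
def estimate_response_time(per_node_seconds):
    """A parallel stage finishes when its slowest node finishes."""
    return max(per_node_seconds)

def needs_replication(per_node_seconds, objective_seconds):
    """Replicate (at extra provider cost) only if the estimated
    response time violates the tenant's objective."""
    return estimate_response_time(per_node_seconds) > objective_seconds

loads = [1.2, 4.8, 1.5]                  # seconds of work per node
print(needs_replication(loads, 3.0))     # node 1 is the bottleneck -> True
```

In the paper's full strategy the bottleneck identified this way guides where replicas are placed; this sketch only shows the accept/replicate decision.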
  • Article
    Citation - WoS: 9
    Citation - Scopus: 11
    Test Input Generation From Cause-Effect Graphs
    (Springer, 2021) Kavzak Ufuktepe, Deniz; Ayav, Tolga; Belli, Fevzi
    Cause-effect graphing is a well-known, requirement-based, systematic testing method with a heuristic approach. Since it was introduced by Myers in 1979, there have not been any sufficiently comprehensive studies on generating test inputs from these graphs. However, several methods exist for test input generation from Boolean expressions. Cause-effect graphs can be more convenient for a wide variety of users than Boolean expressions. Moreover, they can be used to enforce common constraints and rules on the system variables of different expressions of the system. This study proposes a new mutant-based test input generation method, Spectral Testing, for Boolean specification models, based on spectral analysis of Boolean expressions using mutations of the original expression. Unlike Myers' method, Spectral Testing is an algorithmic and deterministic method in which we model the possible faults systematically. Furthermore, the conversion between cause-effect graphs and Boolean expressions is explored so that existing test input generation methods for Boolean expressions can be exploited for cause-effect graphing. An open-source, extendable software tool is developed for generating test inputs from cause-effect graphs using different methods and for performing mutation analysis to quantitatively evaluate and compare these methods. The selected methods MI, MAX-A, MUTP, MNFP, CUTPNFP, MUMCUT, Unique MC/DC, and Masking MC/DC are implemented in the tool together with Myers' technique and the proposed Spectral Testing. For mutation testing, 9 common fault types of Boolean expressions are modeled, implemented, and generated in the tool. An XML-based standard on top of GraphML for representing a cause-effect graph is proposed and used as the input format of the approach. An empirical study is performed on 5 different systems with various requirements, including the benchmark set from the TCAS-II system.
    Our results show that the proposed XML-based cause-effect graph model can be used to represent system requirements. The developed tool can be used for test input generation from the proposed cause-effect graph models and can perform mutation analysis to distinguish between the methods with respect to the effectiveness of their test inputs and their mutant kill scores. The proposed Spectral Testing method outperforms the state-of-the-art methods in the context of critical systems regarding both the effectiveness and the mutant kill scores of the generated test inputs, increasing the chances of revealing faults in the system and reducing the cost of testing. Moreover, the proposed method can be used as a separate or complementary method alongside other well-performing test input generation methods for covering specific fault types.
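The mutation-analysis idea underlying the evaluation above can be sketched in a few lines. The expression, the mutants, and the use of `eval` as a stand-in for a real Boolean-expression parser are all illustrative, not taken from the paper: a mutant of a Boolean specification is "killed" by any test input on which the original and the mutant disagree.

```python
from itertools import product

def evaluate(expr, a, b, c):
    # tiny stand-in for a real Boolean-expression parser/evaluator
    return eval(expr)

def killing_inputs(original, mutant):
    """All inputs over (a, b, c) on which original and mutant disagree."""
    return [(a, b, c) for a, b, c in product([False, True], repeat=3)
            if evaluate(original, a, b, c) != evaluate(mutant, a, b, c)]

original = "(a and b) or c"
# hand-made mutants: one operator flipped or one operand negated
mutants = ["(a or b) or c", "(a and b) and c", "(a and not b) or c"]

for m in mutants:
    print(m, "killed by", len(killing_inputs(original, m)), "of 8 inputs")
```

A test suite's mutant kill score is then the fraction of mutants for which the suite contains at least one killing input; methods like the proposed Spectral Testing are compared on exactly this score.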
  • Article
    Citation - WoS: 4
    Citation - Scopus: 4
    Predicting the Soft Error Vulnerability of Parallel Applications Using Machine Learning
    (Springer, 2021) Öz, Işıl; Arslan, Sanem
    With the widespread use of the multicore systems having smaller transistor sizes, soft errors become an important issue for parallel program execution. Fault injection is a prevalent method to quantify the soft error rates of the applications. However, it is very time consuming to perform detailed fault injection experiments. Therefore, prediction-based techniques have been proposed to evaluate the soft error vulnerability in a faster way. In this work, we present a soft error vulnerability prediction approach for parallel applications using machine learning algorithms. We define a set of features including thread communication, data sharing, parallel programming, and performance characteristics; and train our models based on three ML algorithms. This study uses the parallel programming features, as well as the combination of all features for the first time in vulnerability prediction of parallel programs. We propose two models for the soft error vulnerability prediction: (1) A regression model with rigorous feature selection analysis that estimates correct execution rates, (2) A novel classification model that predicts the vulnerability level of the target programs. We get maximum prediction accuracy rate of 73.2% for the regression-based model, and achieve 89% F-score for our classification model.