Computer Engineering / Bilgisayar Mühendisliği
Permanent URI for this collection: https://hdl.handle.net/11147/10
Search Results: now showing 1-10 of 27
Article: Link Prediction for Completing Graphical Software Models Using Neural Networks (IEEE, 2023)
Authors: Leblebici, Onur; Tuğlular, Tuğkan; Belli, Fevzi

Deficiencies and inconsistencies introduced during the modeling of software systems may result in high costs and negatively impact the quality of all developments performed using these models. Therefore, developing more accurate models will aid software architects in building software systems that meet and exceed expectations. This paper proposes a graph neural network (GNN) method for predicting missing connections, or links, in the graphical models widely employed in modeling software systems. The proposed method takes graphs, viewed as possibly incomplete, primitive graphical models of the system under consideration (SUC), as input and proposes links between their elements through the following steps: (i) transform the models into graph-structured data and extract features from the nodes, (ii) train the GNN model, and (iii) evaluate the performance of the trained model. Two GNN models, based on SEAL and DeepLinker, are evaluated using three performance metrics: cross-entropy loss, area under curve, and accuracy. Event sequence graphs (ESGs) are used as an example of applying the approach to an event-based behavioral modeling technique. Experiments conducted on various datasets and GNN variants reveal that missing connections between events in an ESG can be predicted even with relatively small datasets generated from ESG models.

Article: Application of the Law of Minimum and Dissimilarity Analysis to Regression Test Case Prioritization (IEEE, 2023)
Authors: Ufuktepe, Ekincan; Tuğlular, Tuğkan
Citations: WoS 3, Scopus 4

Regression testing is one of the most expensive processes in testing. Prioritizing test cases in regression testing is critical to detecting faults sooner within a large set of test cases.
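As a point of comparison for the GNN-based link prediction described in the first entry above, a classic non-neural baseline scores each absent edge by the number of neighbors its endpoints share. This is not the paper's method, and the toy graph below is purely illustrative:

```python
# Common-neighbors link prediction: a classic non-neural baseline,
# not the GNN method of the paper above; the graph is illustrative.
from itertools import combinations

def common_neighbor_scores(adj):
    """Score each absent edge by the number of shared neighbors."""
    scores = {}
    for u, v in combinations(sorted(adj), 2):
        if v not in adj[u]:  # only score missing links
            scores[(u, v)] = len(adj[u] & adj[v])
    return scores

# Toy undirected graph as adjacency sets
adj = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b", "d"},
    "d": {"b", "c"},
}
ranked = sorted(common_neighbor_scores(adj).items(), key=lambda kv: -kv[1])
print(ranked[0])  # (('a', 'd'), 2): a and d share neighbors b and c
```

A GNN replaces this hand-picked score with one learned from node features and graph structure.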
We propose a test case prioritization (TCP) technique for regression testing, called LoM-Score, inspired by the Law of Minimum (LoM) from biology. The technique computes impact probabilities for methods via change impact analysis with forward slicing and orders test cases according to LoM. However, this ordering does not consider that consecutive test cases may cover the same methods repeatedly; such an ordering can delay the detection of faults residing in other methods. To address this, we enhance the LoM-Score TCP technique with an adaptive, dissimilarity-based coordinate analysis. This analysis uses Jaccard similarity to compute similarity coefficients between test cases in terms of covered methods, and the enhanced technique, Dissimilarity-LoM-Score (Dis-LoM-Score), applies a corresponding penalty to the ordered test cases. We performed a case study on 10 open-source Java projects from Defects4J, a dataset of real bugs and an infrastructure for controlled experiments provided for software engineering researchers. We then seeded multiple mutants generated by Major, a mutation testing tool, and compared our TCP techniques, LoM-Score and Dis-LoM-Score, with four traditional TCP techniques based on their Average Percentage of Faults Detected (APFD) results.

Article: A Privacy-Preserving Scheme for Smart Grid Using Trusted Execution Environment (IEEE, 2023)
Authors: Akgün, Mete; Üstündağ Soykan, Elif; Soykan, Gürkan
Citations: WoS 16, Scopus 25

The increasing transformation from the legacy power grid to the smart grid brings new opportunities and challenges to power system operations. Bidirectional communication between home-area devices and the distribution system empowers smart grid functionality. More granular energy consumption data flows through the grid and enables better smart grid applications.
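The Jaccard similarity that Dis-LoM-Score (above) computes between test cases can be sketched directly over sets of covered methods; the coverage sets here are hypothetical:

```python
# Jaccard similarity over covered methods, as used by Dis-LoM-Score above.
# The coverage sets below are hypothetical.
def jaccard(a, b):
    """|A ∩ B| / |A ∪ B|; 1.0 means identical coverage."""
    return len(a & b) / len(a | b) if a | b else 0.0

t1 = {"parse", "validate", "save"}
t2 = {"parse", "validate", "render"}
t3 = {"export"}

print(jaccard(t1, t2))  # 0.5 -> penalize running t2 right after t1
print(jaccard(t1, t3))  # 0.0 -> t3 is maximally dissimilar to t1
```

Test cases highly similar to those already ordered are penalized, spreading coverage across methods sooner.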
This may also lead to privacy violations, since the data can be used to infer a consumer's residential behavior, the so-called power signature. Energy utilities mostly aggregate the data, especially when it is shared with stakeholders for managing market operations. Although aggregation is a privacy-friendly approach, recent work shows that it does not fully protect privacy. On the other hand, some applications, like nonintrusive load monitoring, require disaggregated data. The challenge, then, is to facilitate smart grid operations efficiently without sacrificing privacy. In this paper, we propose a privacy-preserving scheme that protects consumer privacy without reducing accuracy for smart grid applications such as load monitoring. The scheme uses a trusted execution environment (TEE) to protect the privacy of the data collected from smart appliances (SAs). It supports customer-oriented smart grid applications because it replaces regular aggregation methods with customer-oriented aggregation to provide privacy; the accuracy loss stemming from disaggregation is thus avoided. Our scheme protects the transferred consumption data all the way from the SAs to the utility, so that false data injection attacks on the smart meter aimed at manipulating the energy requested from the grid are also prevented. We conduct security and game-based privacy analyses under the threat model and provide a performance analysis of our implementation. Our results demonstrate that the proposed method outperforms other privacy methods in terms of communication and computation cost. The execution time of aggregation for 10,000 customers, each with 20 SAs, is approximately 1 second. The decryption operations performed in the TEE have linear complexity; e.g., 172,800 operations take around 1 second, while 1,728,000 operations take around 10 seconds.
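The customer-oriented aggregation step timed above can be sketched as summing appliance readings per customer rather than across customers, so per-customer totals survive. This plain-Python toy omits the TEE and all cryptography; identifiers are made up:

```python
# Customer-oriented aggregation: sum readings per customer rather than
# across customers, so per-customer totals are kept for applications like
# load monitoring. Plain-Python toy; the real scheme runs this inside a
# TEE on protected data. Identifiers and readings are made up.
from collections import defaultdict

def aggregate_per_customer(readings):
    """readings: iterable of (customer_id, appliance_id, watts) tuples."""
    totals = defaultdict(float)
    for customer, _appliance, watts in readings:
        totals[customer] += watts
    return dict(totals)

readings = [
    ("cust1", "fridge", 120.0),
    ("cust1", "heater", 900.0),
    ("cust2", "fridge", 110.0),
]
print(aggregate_per_customer(readings))  # {'cust1': 1020.0, 'cust2': 110.0}
```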
These results can scale up using cloud or hyperscalers for real-world applications, as our scheme performs offline aggregation.

Article: P/Key: PUF-Based Second-Factor Authentication (Public Library of Science, 2023)
Authors: Uysal, Ertan; Akgün, Mete
Citations: WoS 5, Scopus 9

One-time password (OTP) mechanisms are widely used to strengthen authentication processes. In time-based one-time password (TOTP) mechanisms, the client and server store common secrets. However, once the server is compromised, the client's secrets are easy to obtain. To address this, hash-chain-based second-factor authentication protocols have been proposed. However, these protocols suffer from latency in generating OTPs on the client side because of hash-chain traversal, and they can generate only a limited number of OTPs, since that number depends on the length of the hash chain. In this paper, we propose a second-factor authentication protocol that utilizes physically unclonable functions (PUFs) to overcome these problems. In the proposed protocol, PUFs are used to store the clients' secrets securely on the server. If the server is compromised, the attacker cannot obtain the seeds of the clients' secrets and cannot generate valid OTPs to impersonate the clients. Against physical attacks, including side-channel attacks on the server side, our protocol has a mechanism that prevents attackers from learning the secrets of a client interacting with the server. Furthermore, our protocol incurs no client-side delay in OTP generation.

Article: Incremental Testing in Software Product Lines: An Event-Based Approach (IEEE, 2023)
Authors: Beyazıt, Mutlu; Tuğlular, Tuğkan; Öztürk Kaya, Dilek
Citations: WoS 2, Scopus 6

One way to develop fast, effective, and high-quality software products is to reuse previously developed software components and products. For a product family, the software product line (SPL) approach can make reuse more effective.
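The hash-chain OTP construction whose traversal latency and bounded length motivate P/Key (above) is the classic Lamport scheme, sketched here; this is not the paper's PUF-based protocol, and the seed and chain length are illustrative:

```python
# Classic Lamport hash-chain OTP -- the construction whose traversal cost
# and fixed length motivate P/Key above; NOT the paper's PUF-based
# protocol. Seed and chain length are illustrative.
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def chain(seed: bytes, n: int) -> bytes:
    """Apply h to seed n times."""
    for _ in range(n):
        seed = h(seed)
    return seed

N = 1000                   # chain length bounds the number of usable OTPs
seed = b"client-secret"    # hypothetical client seed
anchor = chain(seed, N)    # server stores only h^N(seed)

# The i-th OTP is h^(N-i)(seed); the server verifies by hashing once more.
otp1 = chain(seed, N - 1)  # client walks the chain: latency grows with N
assert h(otp1) == anchor   # server check; otp1 then becomes the new anchor
```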
The goal of SPLs is faster development of low-cost, high-quality software products. This paper proposes an incremental model-based approach to testing products in SPLs. The approach uses event-based behavioral models of the SPL features, reusing existing event-based feature models and product models, along with their test cases, to generate test cases for each new product developed by adding a feature to an existing product. Newly introduced featured event sequence graphs (FESGs) are used for behavioral feature and product modeling; the generated test cases are therefore event sequences. The paper presents evaluations on three software product lines to validate the approach and analyze its characteristics against the state-of-the-art ESG-based testing approach. Results show that the proposed incremental testing approach reuses existing test sets to a high degree, as intended. It is also superior to the state-of-the-art approach in fault detection effectiveness and test generation effort, but inferior in test set size and test execution effort.

Article: A Domain-Specific Language for the Document-Based Model-Driven Engineering of Business Applications (IEEE, 2022)
Authors: Leblebici, Onur; Kardaş, Geylani; Tuğlular, Tuğkan
Citations: WoS 3, Scopus 3

To facilitate the development of business applications, this paper introduces a domain-specific language (DSL) called DARC. Business documents, including descriptions of responsibilities, authorizations, and collaborations, are used as first-class entities during model-driven engineering (MDE) with DARC; the implementation of the business applications can thus be derived automatically from the corresponding document models. The use of the DARC DSL for developing commercial business software was evaluated at an international sales, logistics, and service solution provider.
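The event sequence graphs behind the incremental SPL testing entry above model events as nodes and allowed follow-up events as edges, so a test case is an event sequence. A minimal sketch of generating sequences that cover every edge, on a made-up three-event model:

```python
# Event sequence graph (ESG) sketch: nodes are events, edges are allowed
# follow-ups, and a test case is an event sequence. Greedy walks from the
# start event until every edge is covered. The model below is made up.
def cover_all_edges(esg, start):
    uncovered = {(u, v) for u, vs in esg.items() for v in vs}
    tests = []
    while uncovered:
        node, seq = start, [start]
        while True:
            nxt = next((v for v in esg.get(node, [])
                        if (node, v) in uncovered), None)
            if nxt is None:
                break
            uncovered.discard((node, nxt))
            seq.append(nxt)
            node = nxt
        if len(seq) == 1:  # remaining edges unreachable from start; stop
            break
        tests.append(seq)
    return tests

esg = {"login": ["browse", "logout"], "browse": ["logout"], "logout": []}
print(cover_all_edges(esg, "login"))
# [['login', 'browse', 'logout'], ['login', 'logout']]
```

The incremental approach then reuses such sequences from existing products when a new feature is added, rather than regenerating from scratch.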
The results showed that the code for all business documents, and for more than 50% of the responsibility descriptions composing the business applications, could be generated automatically by modeling with DARC. Finally, user feedback clearly indicated adoption of DARC's features in terms of DSL quality characteristics: functional suitability, usability, reliability, maintainability, productivity, extensibility, compatibility, and expressiveness.

Article: Soft Error Vulnerability Prediction of GPGPU Applications (Springer, 2022)
Authors: Topçu, Burak; Öz, Işıl

As graphics processing units (GPUs) evolve to offer high performance for general-purpose computations, in addition to inherently fault-tolerant graphics applications, soft error reliability becomes a significant concern. Fault injection provides a method for evaluating the soft error vulnerability of target programs. Since fault injection experiments for complex GPU hardware structures take impractically long, prediction-based techniques that evaluate the soft error vulnerability of general-purpose GPU (GPGPU) programs from metrics in different domains become crucial for both HPC developers and GPU vendors. In this work, we propose machine learning (ML)-based prediction frameworks for evaluating the soft error vulnerability of GPGPU programs. We consider program characteristics, hardware usage, and performance metrics collected from simulation and profiling tools. We use regression models to predict masked fault rates, and we build classification models to specify the vulnerability level of GPGPU programs based on their silent data corruption (SDC) and crash rates.
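The regression side of the prediction framework above can be illustrated, in a drastically simplified form, by fitting a one-dimensional least-squares line from a single program metric to a masked-fault rate. This is not the paper's model, and the training pairs are made up:

```python
# One-dimensional least-squares regression, a drastically simplified stand-in
# for the ML regression above that predicts masked-fault rates from program
# metrics. The (metric, rate) training pairs are made up.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

xs = [0.1, 0.3, 0.5, 0.7]        # hypothetical program metric values
ys = [0.82, 0.74, 0.66, 0.58]    # hypothetical masked-fault rates
slope, intercept = fit_line(xs, ys)
print(round(slope * 0.4 + intercept, 2))  # predicted rate for metric = 0.4
```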
Our prediction models achieve maximum prediction accuracies of 95.9%, 88.46%, and 85.7% for masked fault rates, SDCs, and crashes, respectively.

Article: Ignoring Internal Utilities in High-Utility Itemset Mining (MDPI, 2022)
Author: Oğuz, Damla
Citations: WoS 1, Scopus 3

High-utility itemset mining discovers sets of items that are sold together and have utility values higher than a given minimum utility threshold. The utilities of these itemsets are calculated from their internal and external utility values, which correspond, respectively, to the quantity of each item sold in each transaction and to profit units. Internal and external utilities therefore have symmetric effects on deciding whether an itemset is high-utility. This symmetric contribution causes two major related challenges. First, itemsets with low external utility values can easily exceed the minimum utility threshold if they are sold extensively; such itemsets can be found more efficiently with frequent itemset mining. Second, a large number of high-utility itemsets are generated, so interesting or important high-utility itemsets may be overlooked. This study presents an asymmetric approach in which the internal utility values are ignored when finding high-utility itemsets with high external utility values. Experimental results on two real datasets reveal that the external utility values have a fundamental effect on the high-utility itemsets, and that this effect tends to increase for high values of the minimum utility threshold. Moreover, the proposed approach reduces execution time.

Article: Performance and Accuracy Predictions of Approximation Methods for Shortest-Path Algorithms on GPUs (Elsevier, 2022)
Authors: Aktılav, Busenur; Öz, Işıl

Approximate computing techniques, where less-than-perfect solutions are acceptable, present performance-accuracy trade-offs by performing inexact computations.
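The utility calculation at the heart of the itemset-mining entry above sums, over the transactions containing all items of an itemset, quantity times unit profit; the paper's asymmetric variant drops the quantity (internal utility). Both can be sketched with made-up transactions and profits:

```python
# Utility of an itemset: over transactions containing all of its items,
# sum quantity * unit profit per item. The asymmetric variant described
# above ignores quantities (internal utility). Data are made up.
def utility(itemset, transactions, profit, ignore_quantity=False):
    total = 0
    for t in transactions:
        if itemset <= t.keys():  # transaction contains the whole itemset
            total += sum((1 if ignore_quantity else t[i]) * profit[i]
                         for i in itemset)
    return total

profit = {"bread": 1, "cheese": 5}       # external utility (unit profit)
transactions = [
    {"bread": 4, "cheese": 1},           # item -> quantity sold
    {"bread": 2},                        # lacks cheese; contributes nothing
]
print(utility({"bread", "cheese"}, transactions, profit))        # 9
print(utility({"bread", "cheese"}, transactions, profit, True))  # 6
```

With quantities ignored, heavily-sold low-profit itemsets no longer dominate the high-utility results.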
Moreover, heterogeneous architectures, combinations of miscellaneous compute units, offer high performance as well as energy efficiency. Graph algorithms benefit from the parallel compute units of heterogeneous GPU architectures as well as from the performance improvements offered by approximation methods. Since different approximations yield different speedups and accuracy losses for a target execution, testing all methods with various parameters is impractical. In this work, we perform approximate computations for three shortest-path graph algorithms and propose a machine learning framework to predict the impact of the approximations on program performance and output accuracy. We evaluate random predictions for both synthetic and real road-network graphs, as well as predictions of large-graph cases from small graph instances, and achieve prediction error rates below 5% for speedup and inaccuracy values.

Article: A Novel Efficient Method for Tracking Evolution of Communities in Dynamic Networks (Institute of Electrical and Electronics Engineers Inc., 2022)
Authors: Karataş, Arzum; Şahin, Serap
Citations: WoS 3, Scopus 4

Tracking community evolution can provide insights into significant changes in community interaction patterns, promote the understanding of structural changes, and help predict the evolutionary behavior of networks. It is therefore a fundamental component of decision-making in many fields, such as marketing, public health, and criminology. However, in this problem domain, it remains an open challenge to capture all possible events with high accuracy, memory efficiency, and reasonable execution time in a single solution. To address this gap, we propose a novel method for tracking the evolution of communities (TREC). TREC efficiently detects similar communities through a combination of locality-sensitive hashing and MinHashing.
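The MinHashing just mentioned rests on a simple fact: the probability that two sets agree on their minimum hash value equals their Jaccard similarity, so short signatures give a cheap similarity estimate between community snapshots. A stdlib-only sketch (hash count and community members are illustrative, not TREC's parameters):

```python
# MinHash: P(two sets share a minimum hash) equals their Jaccard
# similarity, so comparing signatures estimates similarity cheaply.
# Hash count and community members are illustrative, not TREC's settings.
import hashlib

def minhash(members, num_hashes=128):
    """One minimum per salted hash function; the list is the signature."""
    return [min(int(hashlib.sha256(f"{i}:{m}".encode()).hexdigest(), 16)
                for m in members)
            for i in range(num_hashes)]

def estimate_jaccard(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

old = {"ann", "bob", "cem", "deb"}   # community at snapshot t
new = {"ann", "bob", "cem", "eli"}   # candidate match at snapshot t+1
est = estimate_jaccard(minhash(old), minhash(new))
print(round(est, 2))  # close to the true Jaccard 3/5 = 0.6
```

Locality-sensitive hashing then bands these signatures so that only likely-similar community pairs are compared at all.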
We provide experimental evidence on four benchmark datasets and on real dynamic datasets (AS, DBLP, Yelp, and Digg), comparing TREC with the baseline work. The results show that TREC achieves an accuracy of about 98%, has a minimal space requirement, and is very close to the best-performing work in terms of time complexity. Moreover, it can track all event types in a single solution.
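For reference, the APFD metric used to compare the prioritization techniques in the regression-testing entry earlier in this listing is 1 - (sum of each fault's first-detection position)/(n*m) + 1/(2n), for n tests and m faults with 1-based positions. A sketch over a made-up fault matrix:

```python
# APFD = 1 - (sum of first-detection positions)/(n*m) + 1/(2n), where n is
# the number of tests, m the number of faults, and positions are 1-based.
# The fault matrix below is made up.
def apfd(order, detects):
    """order: test ids in execution order; detects: test id -> fault set."""
    n = len(order)
    faults = set().union(*detects.values())
    m = len(faults)
    first = {f: next(i for i, t in enumerate(order, 1) if f in detects[t])
             for f in faults}
    return 1 - sum(first.values()) / (n * m) + 1 / (2 * n)

detects = {"t1": {"f1"}, "t2": {"f1", "f2"}, "t3": {"f3"}}
print(apfd(["t2", "t3", "t1"], detects))  # 13/18, about 0.722
print(apfd(["t1", "t2", "t3"], detects))  # 0.5: faults found later
```

Higher APFD means the ordering exposes faults earlier in the run.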
