Computer Engineering / Bilgisayar Mühendisliği

Permanent URI for this collection: https://hdl.handle.net/11147/10

Search Results

Now showing 1 - 10 of 20
  • Article
    Link Prediction for Completing Graphical Software Models Using Neural Networks
    (IEEE, 2023) Leblebici, Onur; Tuğlular, Tuğkan; Belli, Fevzi
    Deficiencies and inconsistencies introduced during the modeling of software systems may result in high costs and negatively impact the quality of all developments performed using these models. Therefore, developing more accurate models will aid software architects in developing software systems that meet and exceed expectations. This paper proposes a graph neural network (GNN) method for predicting missing connections, or links, in the graphical models widely employed in modeling software systems. The proposed method takes graphs, as allegedly incomplete, primitive graphical models of the system under consideration (SUC), as input and proposes links between their elements through the following steps: (i) transform the models into graph-structured data and extract features from the nodes, (ii) train the GNN model, and (iii) evaluate the performance of the trained model. Two GNN models based on SEAL and DeepLinker are evaluated using three performance metrics, namely cross-entropy loss, area under curve, and accuracy. Event sequence graphs (ESGs) are used as an example of applying the approach to an event-based behavioral modeling technique. The results of experiments conducted on various datasets and GNN variations reveal that missing connections between events in an ESG can be predicted even with relatively small datasets generated from ESG models.
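    A minimal sketch of the link-prediction idea described above, assuming a tiny hand-made event graph and a single untrained message-passing layer in NumPy; it is not the paper's SEAL or DeepLinker pipeline, only an illustration of scoring a candidate edge from node embeddings.

        # Toy sketch (not the paper's SEAL/DeepLinker pipeline): one GNN-style
        # message-passing layer over a small event graph, then a dot-product link score.
        import numpy as np

        # Hypothetical event graph: adjacency over 5 events of an ESG-like model.
        A = np.array([[0, 1, 0, 0, 1],
                      [1, 0, 1, 0, 0],
                      [0, 1, 0, 1, 0],
                      [0, 0, 1, 0, 1],
                      [1, 0, 0, 1, 0]], dtype=float)

        A_hat = A + np.eye(5)                      # add self-loops
        D_inv = np.diag(1.0 / A_hat.sum(axis=1))   # degree normalization
        X = np.eye(5)                              # one-hot node features (placeholder)

        rng = np.random.default_rng(0)
        W = rng.normal(scale=0.1, size=(5, 8))     # untrained weights, illustration only

        H = np.maximum(D_inv @ A_hat @ X @ W, 0.0) # one propagation step + ReLU

        def link_score(u, v):
            """Sigmoid of the embedding dot product: the 'predicted link' probability."""
            return 1.0 / (1.0 + np.exp(-H[u] @ H[v]))

        print(link_score(0, 2))   # candidate missing edge between events 0 and 2

    In a trained model, W would be learned from the known edges of the incomplete model; here it only demonstrates how a score for a candidate link is produced.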
  • Article
    Citation - WoS: 3
    Citation - Scopus: 4
    Application of the Law of Minimum and Dissimilarity Analysis to Regression Test Case Prioritization
    (IEEE, 2023) Ufuktepe, Ekincan; Tuğlular, Tuğkan
    Regression testing is one of the most expensive processes in testing. Prioritizing test cases in regression testing is critical for detecting faults sooner within a large set of test cases. We propose a test case prioritization (TCP) technique for regression testing called LoM-Score, inspired by the Law of Minimum (LoM) from biology. This technique uses the impact probabilities of methods, computed by change impact analysis with forward slicing, and orders test cases according to LoM. However, this ordering does not consider the possibility that consecutive test cases may cover the same methods repeatedly; such an ordering can therefore delay the detection of faults that exist in other methods. To solve this problem, we enhance the LoM-Score TCP technique with an adaptive, dissimilarity-based coordinate analysis approach. The dissimilarity-based coordinate analysis uses Jaccard similarity to calculate the similarity coefficients between test cases in terms of covered methods, and the enhanced technique, called Dissimilarity-LoM-Score (Dis-LoM-Score), applies a corresponding penalty to the ordered test cases. We performed our case study on 10 open-source Java projects from Defects4J, a dataset of real bugs and an infrastructure for controlled experiments provided for software engineering researchers. We then hand-seeded multiple mutants generated by Major, a mutation testing tool, and compared our TCP techniques, LoM-Score and Dis-LoM-Score, with four traditional TCP techniques based on their Average Percentage of Faults Detected (APFD) results.
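    A rough sketch of the dissimilarity step, assuming made-up LoM-style scores and covered-method sets; the penalty used here (score scaled by one minus the Jaccard similarity to the previously selected test) is an illustrative stand-in, not the exact Dis-LoM-Score computation.

        # Minimal sketch of the dissimilarity idea: Jaccard similarity over covered
        # methods, used to penalize consecutive test cases that keep covering the
        # same code. Scores and coverage sets are hypothetical.
        def jaccard(a, b):
            return len(a & b) / len(a | b) if (a | b) else 0.0

        # Hypothetical LoM-style scores and method coverage per test case.
        scores   = {"t1": 0.9, "t2": 0.8, "t3": 0.7}
        coverage = {"t1": {"m1", "m2"}, "t2": {"m1", "m2"}, "t3": {"m3"}}

        ordered, remaining = [], dict(scores)
        while remaining:
            def penalized(t):
                sim = jaccard(coverage[t], coverage[ordered[-1]]) if ordered else 0.0
                return remaining[t] * (1.0 - sim)   # penalize similarity to the last pick
            best = max(remaining, key=penalized)
            ordered.append(best)
            del remaining[best]

        print(ordered)   # ['t1', 't3', 't2'] — the dissimilar t3 jumps ahead of t2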
  • Article
    Citation - WoS: 16
    Citation - Scopus: 25
    A Privacy-Preserving Scheme for Smart Grid Using Trusted Execution Environment
    (IEEE, 2023) Akgün, Mete; Üstündağ Soykan, Elif; Soykan, Gürkan
    The increasing transformation from the legacy power grid to the smart grid brings new opportunities and challenges to power system operations. Bidirectional communications between home-area devices and the distribution system empower smart grid functionalities. More granular energy consumption data flows through the grid and enables better smart grid applications. This may also lead to privacy violations, since the data can be used to infer the consumer's residential behavior, the so-called power signature. Energy utilities mostly aggregate the data, especially if the data is shared with stakeholders for the management of market operations. Although this is a privacy-friendly approach, recent works show that it does not fully protect privacy. On the other hand, some applications, like nonintrusive load monitoring, require disaggregated data. Hence, the challenging problem is to find an efficient way to facilitate smart grid operations without sacrificing privacy. In this paper, we propose a privacy-preserving scheme that protects consumer privacy without reducing accuracy for smart grid applications like load monitoring. In the proposed scheme, we use a trusted execution environment (TEE) to protect the privacy of the data collected from smart appliances (SAs). The scheme allows customer-oriented smart grid applications because it does not use regular aggregation methods but instead uses customer-oriented aggregation to provide privacy; hence, the accuracy loss stemming from disaggregation is prevented. Our scheme protects the transferred consumption data all the way from the SAs to the utility, so that possible false data injection attacks on the smart meter that aim to deceive the energy request from the grid are also prevented. We conduct security and game-based privacy analysis under the threat model and provide a performance analysis of our implementation. Our results demonstrate that the proposed method outperforms other privacy methods in terms of communication and computation cost. The execution time of aggregation for 10,000 customers, each with 20 SAs, is approximately 1 second. The decryption operations performed on the TEE have linear complexity; e.g., 172,800 operations take around 1 second, while 1,728,000 operations take around 10 seconds. These results can scale up using cloud providers or hyperscalers for real-world applications, as our scheme performs offline aggregation.
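    A toy illustration of customer-oriented aggregation as opposed to a single grid-wide sum, assuming hypothetical appliance readings; the TEE, encryption, and attestation machinery of the actual scheme is omitted entirely.

        # Rough illustration of customer-oriented aggregation (the TEE, encryption and
        # attestation steps are omitted): per-customer totals are kept, so per-household
        # load monitoring stays accurate, unlike a single grid-wide sum.
        readings = {                      # hypothetical smart-appliance readings (kWh)
            "customer_A": {"fridge": 0.3, "heater": 1.2},
            "customer_B": {"fridge": 0.4, "oven": 0.9},
        }

        per_customer = {c: sum(apps.values()) for c, apps in readings.items()}
        grid_total   = sum(per_customer.values())

        print(per_customer)   # {'customer_A': 1.5, 'customer_B': 1.3}
        print(grid_total)     # 2.8 — classic aggregation loses the per-customer detail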
  • Article
    Citation - WoS: 5
    Citation - Scopus: 9
    P/Key: PUF-Based Second Factor Authentication
    (Public Library of Science, 2023) Uysal, Ertan; Akgün, Mete
    One-time password (OTP) mechanisms are widely used to strengthen authentication processes. In time-based one-time password (TOTP) mechanisms, the client and server store common secrets. However, once the server is compromised, the client's secrets are easy to obtain. To solve this issue, hash-chain-based second-factor authentication protocols have been proposed. However, these protocols suffer from latency in the generation of OTPs on the client side because of the hash-chain traversal. Moreover, they can generate only a limited number of OTPs, since that number depends on the length of the hash chain. In this paper, we propose a second-factor authentication protocol that utilizes Physically Unclonable Functions (PUFs) to overcome these problems. In the proposed protocol, PUFs are used to store the secrets of the clients securely on the server. In case of server compromise, the attacker cannot obtain the seeds of the clients' secrets and cannot generate valid OTPs to impersonate the clients. In the case of physical attacks, including side-channel attacks on the server side, our protocol has a mechanism that prevents attackers from learning the secrets of a client interacting with the server. Furthermore, our protocol does not incur any client-side delay in OTP generation.
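    For context, a small sketch of the hash-chain (Lamport-style) OTP scheme the abstract contrasts with, assuming a hypothetical seed and chain length; it shows the client-side traversal cost and the fixed OTP budget that the proposed PUF-based protocol is designed to avoid.

        # Background sketch of a hash-chain OTP scheme: OTP_i = H^(n-i)(seed), so the
        # client must walk the chain for every OTP, and only n OTPs exist in total.
        import hashlib

        def h(x: bytes) -> bytes:
            return hashlib.sha256(x).digest()

        def hash_chain_otp(seed: bytes, n: int, i: int) -> bytes:
            """i-th OTP of an n-link chain: H applied (n - i) times to the seed."""
            value = seed
            for _ in range(n - i):
                value = h(value)
            return value

        seed = b"hypothetical-client-seed"
        n = 1000
        otp_1 = hash_chain_otp(seed, n, 1)          # costs n-1 hashes on the client
        # The server verifies by hashing the received OTP once and comparing it with
        # the previously stored chain value (initially H^n(seed)).
        assert h(otp_1) == hash_chain_otp(seed, n, 0)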
  • Article
    Citation - WoS: 2
    Citation - Scopus: 6
    Incremental Testing in Software Product Lines: An Event-Based Approach
    (IEEE, 2023) Beyazıt, Mutlu; Tuğlular, Tuğkan; Öztürk Kaya, Dilek
    One way of developing fast, effective, and high-quality software products is to reuse previously developed software components and products. In the case of a product family, the software product line (SPL) approach can make reuse more effective. The goal of SPLs is faster development of low-cost and high-quality software products. This paper proposes an incremental model-based approach to testing products in SPLs. The proposed approach utilizes event-based behavioral models of the SPL features. It reuses existing event-based feature models and event-based product models, along with their test cases, to generate test cases for each new product developed by adding a new feature to an existing product. Newly introduced featured event sequence graphs (FESGs) are used for behavioral feature and product modeling; thus, the generated test cases are event sequences. The paper presents evaluations with three software product lines to validate the approach and analyze its characteristics by comparing it to the state-of-the-art ESG-based testing approach. Results show that the proposed incremental testing approach reuses the existing test sets to a high degree, as intended. It is also superior to the state-of-the-art approach in terms of fault detection effectiveness and test generation effort, but inferior in terms of test set size and test execution effort.
  • Article
    Citation - WoS: 3
    Citation - Scopus: 3
    A Domain-Specific Language for the Document-Based Model-Driven Engineering of Business Applications
    (IEEE, 2022) Leblebici, Onur; Kardaş, Geylani; Tuğlular, Tuğkan
    To facilitate the development of business applications, a domain-specific language (DSL) called DARC is introduced in this paper. Business documents, including descriptions of responsibilities, authorizations, and collaborations, are used as first-class entities during model-driven engineering (MDE) with DARC. Hence, the implementation of business applications can be derived automatically from the corresponding document models. The evaluation of using the DARC DSL for the development of commercial business software was performed at an international sales, logistics, and service solution provider. The results showed that the code for all business documents and more than 50% of the responsibility descriptions composing the business applications could be generated automatically by modeling with DARC. Finally, according to the users' feedback, the assessment clearly revealed the adoption of DARC features in terms of DSL quality characteristics, namely functional suitability, usability, reliability, maintainability, productivity, extensibility, compatibility, and expressiveness.
  • Article
    Citation - WoS: 1
    Citation - Scopus: 3
    Ignoring Internal Utilities in High-Utility Itemset Mining
    (MDPI, 2022) Oğuz, Damla
    High-utility itemset mining discovers sets of items that are sold together and have utility values higher than a given minimum utility threshold. The utilities of these itemsets are calculated by considering their internal and external utility values, which correspond, respectively, to the quantity of each item sold in each transaction and to profit units. Therefore, internal and external utilities have symmetric effects on deciding whether an itemset is high-utility. The symmetric contributions of both utilities cause two major related challenges. First, itemsets with low external utility values can easily exceed the minimum utility threshold if they are sold extensively; in this case, such itemsets can be found more efficiently using frequent itemset mining. Second, a large number of high-utility itemsets are generated, which can result in interesting or important high-utility itemsets being overlooked. This study presents an asymmetric approach in which the internal utility values are ignored when finding high-utility itemsets with high external utility values. The experimental results on two real datasets reveal that the external utility values have a fundamental effect on the high-utility itemsets. The results also show that this effect tends to increase for high values of the minimum utility threshold. Moreover, the proposed approach reduces the execution time.
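    A small worked sketch of the utility calculation, assuming a made-up transaction table and profit values: the classic utility multiplies internal utility (quantity) by external utility (profit), while a variant in the spirit of the paper ignores the quantities.

        # Classic itemset utility vs. an external-utility-only variant. Data is made up.
        transactions = [                     # item -> quantity sold in that transaction
            {"bread": 3, "caviar": 1},
            {"bread": 5},
            {"caviar": 2, "wine": 1},
        ]
        profit = {"bread": 0.2, "caviar": 9.0, "wine": 4.0}   # external utilities

        def utility(itemset):
            """Classic definition: sum of quantity * profit over transactions
            that contain the whole itemset."""
            total = 0.0
            for t in transactions:
                if all(i in t for i in itemset):
                    total += sum(t[i] * profit[i] for i in itemset)
            return total

        def external_only(itemset):
            """Variant in the spirit of the paper: count occurrences, ignore quantities."""
            return sum(sum(profit[i] for i in itemset)
                       for t in transactions if all(i in t for i in itemset))

        print(utility({"bread"}), external_only({"bread"}))   # 1.6 vs 0.4

    The cheap, frequently sold item reaches a high classic utility purely through quantity, which is exactly the first challenge the abstract points out.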
  • Article
    Citation - WoS: 3
    Citation - Scopus: 4
    A Novel Efficient Method for Tracking Evolution of Communities in Dynamic Networks
    (Institute of Electrical and Electronics Engineers Inc., 2022) Karataş, Arzum; Şahin, Serap
    Tracking community evolution can provide insights into significant changes in community interaction patterns, promote the understanding of structural changes, and predict the evolutionary behavior of networks. Therefore, it is a fundamental component of decision-making mechanisms in many fields, such as marketing, public health, and criminology. However, in this problem domain, it remains an open challenge to capture all possible events with high accuracy, memory efficiency, and reasonable execution times under a single solution. To address this gap, we propose a novel method for tracking the evolution of communities (TREC). TREC efficiently detects similar communities through a combination of Locality Sensitive Hashing and MinHashing. We provide experimental evidence on four benchmark datasets and on real dynamic datasets such as AS, DBLP, Yelp, and Digg, and compare TREC with the baseline work. The results show that TREC achieves an accuracy of about 98%, has a minimal space requirement, and is very close to the best-performing work in terms of time complexity. Moreover, it can track all event types in a single solution.
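    A toy MinHash sketch, assuming hypothetical community member sets and plain SHA-256-based hash functions (no LSH banding): short signatures approximate the Jaccard similarity of two snapshots, which is the kind of cheap matching of similar communities the abstract describes.

        # Estimate the Jaccard similarity of two communities' member sets from
        # MinHash signatures, as a cheap way to match communities across snapshots.
        import hashlib

        def minhash(members, num_hashes=64):
            sig = []
            for i in range(num_hashes):
                sig.append(min(
                    int.from_bytes(hashlib.sha256(f"{i}:{m}".encode()).digest()[:8], "big")
                    for m in members))
            return sig

        def estimated_jaccard(sig_a, sig_b):
            return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

        community_t0 = {"u1", "u2", "u3", "u4"}          # hypothetical snapshots
        community_t1 = {"u2", "u3", "u4", "u5"}

        print(estimated_jaccard(minhash(community_t0), minhash(community_t1)))
        # close to the true Jaccard of 3/5 = 0.6, enough to flag a 'continue' event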
  • Article
    Citation - WoS: 1
    Citation - Scopus: 1
    Hybrid Probabilistic Timing Analysis With Extreme Value Theory and Copulas
    (Elsevier, 2022) Bekdemir, Levent; Bazlamaçcı, Cüneyt F.
    The primary challenge of time-critical systems is to guarantee that a task completes its execution before its deadline. In order to ensure compliance with timing requirements, it is necessary to analyze the timing behavior of the overall software. The Worst-Case Execution Time (WCET) represents the maximum amount of time an individual software unit takes to execute and is used for scheduling analysis in safety-critical systems. Recent studies focus on statistical approaches, which augment measurement-based timing analysis with a probabilistic confidence level by applying stochastic methods. Common approaches either utilize Extreme Value Theory (EVT) on end-to-end measurements or convolution techniques over a group of program units to derive probabilistic upper bounds for the program. The former method does not ensure path coverage, while the latter suffers from ignoring possible extreme cases. Furthermore, the current state-of-the-art convolution methods employed in a commercial WCET analysis tool overestimate the results because they assume worst-case dependence between basic blocks. In this paper, we propose a hybrid probabilistic timing analysis framework that models the program units with EVT to capture extreme cases and uses copulas to model the dependency between the units, deriving tighter distributional bounds and mitigating the effects of co-monotonic assumptions.
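    An EVT-only sketch, assuming synthetic timing data and SciPy's GEV implementation; it fits block maxima and reads a high quantile as a probabilistic WCET estimate, while the copula-based dependence modelling between units is left out.

        # Fit a GEV distribution to block maxima of measured execution times and read
        # a high quantile as a probabilistic WCET estimate. Timing data is synthetic.
        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(42)
        measurements = rng.gamma(shape=5.0, scale=2.0, size=10_000)  # fake unit timings (us)

        block_maxima = measurements.reshape(100, 100).max(axis=1)    # maxima of 100-sample blocks
        c, loc, scale = genextreme.fit(block_maxima)                 # GEV shape/location/scale

        pwcet = genextreme.ppf(1 - 1e-6, c, loc=loc, scale=scale)    # exceeded w.p. ~1e-6 per block
        print(f"pWCET estimate at 1e-6 exceedance: {pwcet:.1f} us")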
  • Article
    Citation - WoS: 3
    Citation - Scopus: 6
    Model-Based Ideal Testing of GUI Programs: Approach and Case Studies
    (IEEE-Inst Electrical Electronics Engineers Inc., 2021) Kilincceker, Onur; Silistre, Alper; Belli, Fevzi; Challenger, Moharram
    Traditionally, software testing aims at showing the presence of faults. This paper proposes a novel approach to testing graphical user interfaces (GUIs) that shows both the presence and the absence of faults, in the sense of ideal testing. The approach uses a positive testing concept to show that the GUI under consideration (GUC) does what the user expects and, conversely, a negative testing concept to show that the GUC does not do anything the user does not expect, building a holistic view. The first step of the approach models the GUC by a finite state machine (FSM), which enables the model-based generation of test cases; this is always possible because GUIs are considered strictly sequential processes. The next step converts the FSM to an equivalent regular expression (RE), which is analyzed first to construct test selection criteria for excluding redundant test cases and then to construct test coverage criteria for terminating the positive test process. Both criteria enable us to assess the adequacy and efficiency of the positive tests performed. The negative tests are realized by systematically mutating the FSM to model faults whose absence is to be shown. These mutant FSMs are handled and assessed in the same way as in positive testing. Two case studies illustrate and validate the approach; the experimental results are analyzed to discuss the pros and cons of the techniques introduced.
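    A minimal sketch of the positive/negative testing idea, assuming a made-up GUI modeled as a small FSM: a transition-coverage generator yields positive event sequences, and deleting one transition gives a mutant whose differing sequences stand in for the negative tests.

        # FSM as a dict of transitions, a simple transition-coverage test generator,
        # and a mutant FSM with one transition dropped. The GUI itself is hypothetical.
        fsm = {                                   # state -> {event: next_state}
            "Start":  {"open": "Dialog"},
            "Dialog": {"type": "Dialog", "ok": "Done", "cancel": "Start"},
        }

        def transition_tests(machine, initial="Start"):
            """BFS that emits one event sequence per distinct (state, event) transition."""
            tests, frontier, seen = [], [(initial, [])], set()
            while frontier:
                state, path = frontier.pop(0)
                for event, nxt in machine.get(state, {}).items():
                    if (state, event) in seen:
                        continue
                    seen.add((state, event))
                    tests.append(path + [event])          # positive test case
                    frontier.append((nxt, path + [event]))
            return tests

        positive = transition_tests(fsm)

        mutant = {s: dict(t) for s, t in fsm.items()}
        del mutant["Dialog"]["cancel"]            # seeded fault: 'cancel' transition removed
        mutant_tests = transition_tests(mutant)
        negative = [seq for seq in positive if seq not in mutant_tests]

        print(positive)
        print(negative)   # sequences that distinguish the original model from the mutant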