Computer Engineering / Bilgisayar Mühendisliği
Permanent URI for this collection: https://hdl.handle.net/11147/10
Search Results (56 results)
Conference Object: Enhancing genomic data sharing with blockchain-enabled dynamic consent in Beacon v2 (Springer Nature, 2024). Binokay, Leman; Celik, Hamit Mervan; Gurdal, Gultekin; Ayav, Tolga; Tuglular, Tugkan; Oktay, Yavuz; Karakulah, Gokhan.

Article: Link Prediction for Completing Graphical Software Models Using Neural Networks (IEEE, 2023). Leblebici, Onur; Tuğlular, Tuğkan; Belli, Fevzi.
Deficiencies and inconsistencies introduced during the modeling of software systems may result in high costs and negatively impact the quality of all development work performed using these models. Therefore, developing more accurate models will aid software architects in building software systems that match and exceed expectations. This paper proposes a graph neural network (GNN) method for predicting missing connections, or links, in the graphical models widely employed in modeling software systems. The proposed method takes as input graphs representing presumably incomplete, primitive graphical models of the system under consideration (SUC) and proposes links between their elements through the following steps: (i) transform the models into graph-structured data and extract features from the nodes, (ii) train the GNN model, and (iii) evaluate the performance of the trained model. Two GNN models, based on SEAL and DeepLinker, are evaluated using three performance metrics: cross-entropy loss, area under curve, and accuracy. Event sequence graphs (ESGs) are used as an example of applying the approach to an event-based behavioral modeling technique. Experiments conducted on various datasets and GNN variations reveal that missing connections between events in an ESG can be predicted even with relatively small datasets generated from ESG models.
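For readers who want to experiment with the idea behind the link-prediction entry above, here is a minimal sketch of GNN-based link prediction on a toy event graph. It is not the paper's SEAL or DeepLinker pipeline: the five-node graph, the random node features, the GCN encoder, and the dot-product decoder are all illustrative stand-ins, and PyTorch Geometric is assumed to be installed.

```python
# Minimal GNN link-prediction sketch (NOT the paper's SEAL/DeepLinker setup).
# The toy graph and random node features below are placeholders for a real
# ESG-derived model; assumes PyTorch and PyTorch Geometric are available.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv
from torch_geometric.utils import negative_sampling

# Toy event graph: nodes are events, directed edges are observed transitions.
edge_index = torch.tensor([[0, 1, 2, 3, 0], [1, 2, 3, 4, 2]], dtype=torch.long)
x = torch.randn(5, 8)  # placeholder node features (step i: feature extraction)
data = Data(x=x, edge_index=edge_index)

class Encoder(torch.nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, hid_dim)
    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

model = Encoder(8, 16)
opt = torch.optim.Adam(model.parameters(), lr=0.01)

def score(z, pairs):  # dot-product decoder: higher score = more likely a link
    return (z[pairs[0]] * z[pairs[1]]).sum(dim=-1)

for epoch in range(100):  # step (ii): train the GNN model
    model.train()
    opt.zero_grad()
    z = model(data.x, data.edge_index)
    neg = negative_sampling(data.edge_index, num_nodes=data.num_nodes)
    logits = torch.cat([score(z, data.edge_index), score(z, neg)])
    labels = torch.cat([torch.ones(data.edge_index.size(1)),
                        torch.zeros(neg.size(1))])
    loss = F.binary_cross_entropy_with_logits(logits, labels)  # cross-entropy
    loss.backward()
    opt.step()

# Step (iii): score and rank unobserved node pairs to propose missing links.
```

After training, scoring all unobserved node pairs with the decoder and ranking them yields the candidate links to propose to the modeler.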
Article: Microservice-Based Projects in Agile World: A Structured Interview (Elsevier, 2024). Unlu, Huseyin; Kennouche, Dhia Eddine; Soylu, Gorkem Kiling; Demirors, Onur. Citations: WoS 9, Scopus 13.
Context: During the last decade, microservice-based software architecture (MSSA) has been the preferred design paradigm for a growing number of companies. MSSA, particularly in the form of reactive systems, differs substantially from more conventional design paradigms such as object-oriented analysis and design, so adopting it demands that software organizations transform their culture. However, there is a lack of research exploring the common practices of software companies that implement MSSAs.
Objective: Our goal is to gain insight into how practices such as agile methodology, software analysis, design, testing, size measurement, and effort estimation are performed in software projects that embrace the MSSA paradigm. Alongside identifying the practices used for the MSSA paradigm, we aim to determine the challenges organizations face in adopting microservice-based software architectures.
Method: We performed structured interviews with participants from 20 different organizations, spanning different roles, domains, and countries, to collect their views, experiences, and the challenges they faced.
Results: Our results reveal that organizations find agile development compatible with microservices. In general, they continue to use traditional object-oriented modeling notations for analysis and design at an abstract level, and they continue to use the same subjective size measurement and effort estimation approaches they previously used for traditional architectures. However, they face unique challenges in developing microservices.
Conclusion: Although organizations face challenges, practitioners continue to use familiar techniques carried over from traditional architectures. The results provide a snapshot of the part of the software industry that utilizes microservices.

Article: Application of the Law of Minimum and Dissimilarity Analysis to Regression Test Case Prioritization (IEEE, 2023). Ufuktepe, Ekincan; Tuğlular, Tuğkan. Citations: WoS 3, Scopus 4.
Regression testing is one of the most expensive processes in testing. Prioritizing test cases is critical to detecting faults sooner within a large set of test cases. We propose a test case prioritization (TCP) technique for regression testing called LoM-Score, inspired by the Law of Minimum (LoM) from biology. The technique computes impact probabilities of methods via change impact analysis with forward slicing and orders test cases according to LoM. However, this ordering does not account for the possibility that consecutive test cases cover the same methods repeatedly, which can delay the detection of faults in other methods. To address this, we enhance the LoM-Score technique with an adaptive, dissimilarity-based coordinate analysis approach. It uses Jaccard similarity to compute similarity coefficients between test cases in terms of covered methods, and the enhanced technique, Dissimilarity-LoM-Score (Dis-LoM-Score), applies a corresponding penalty to the ordered test cases. We performed a case study on 10 open-source Java projects from Defects4J, a dataset of real bugs and an infrastructure for controlled experiments aimed at software engineering researchers. We then seeded multiple mutants generated by Major, a mutation testing tool, and compared our TCP techniques, LoM-Score and Dis-LoM-Score, with four traditional TCP techniques based on their Average Percentage of Faults Detected (APFD) results.
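The entry above describes ordering tests by impact probabilities and then penalizing coverage overlap. The sketch below illustrates that shape of algorithm under stated assumptions: it hypothetically takes a test's base score as the minimum impact probability over the methods it covers (a literal reading of the Law of Minimum) and demotes candidates by their Jaccard similarity to the previously selected test. The paper's exact LoM-Score and penalty formulas may differ.

```python
# Illustrative sketch of dissimilarity-aware test case prioritization.
# Assumption (hypothetical, not the paper's exact formula): a test's base
# score is the minimum impact probability over its covered methods, and a
# Jaccard-similarity penalty demotes tests too similar to the last pick.

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def prioritize(tests: dict, impact: dict) -> list:
    """tests: test name -> set of covered methods.
    impact: method name -> impact probability from change impact analysis."""
    base = {t: min(impact.get(m, 0.0) for m in cov) for t, cov in tests.items()}
    ordered, remaining = [], set(tests)
    while remaining:
        def penalized(t):
            # Penalize overlap with the most recently selected test, if any.
            sim = jaccard(tests[t], tests[ordered[-1]]) if ordered else 0.0
            return base[t] * (1.0 - sim)
        pick = max(remaining, key=penalized)
        ordered.append(pick)
        remaining.remove(pick)
    return ordered

impact = {"m1": 0.9, "m2": 0.4, "m3": 0.7}
tests = {"t1": {"m1", "m2"}, "t2": {"m1"}, "t3": {"m3"}}
print(prioritize(tests, impact))  # ['t2', 't3', 't1']
```

Note how t3 overtakes t1 in the ordering: although t1 touches the highest-impact method, it is penalized for covering the same method as the already-selected t2.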
Article: An Interestingness Measure for Knowledge Bases (Elsevier, 2023). Oğuz, Damla; Soygazi, Fatih. Citations: Scopus 1.
Association rule mining and logical rule mining both aim to discover interesting relationships in data or knowledge. In association rule mining, relationships are identified based on the co-occurrence of items in a dataset, while in logical rule mining they are determined by logical relationships between atoms in a knowledge base. Association rule mining has been widely studied in transactional databases, mainly for market basket analysis, where confidence has become the most widely used interestingness measure for assessing the strength of a rule; many other interestingness measures have been proposed because confidence can be insufficient for filtering out negatively associated relationships. Recently, logical rule mining has become an important area of research, as new facts can be inferred by applying discovered logical rules; such rules can be used for reasoning, for identifying potential errors in knowledge bases, and for better understanding data. However, only a few measures currently exist for logical rule mining. Furthermore, current measures do not consider relations that can have several objects, called quasi-functions, which can dramatically alter the interestingness of a rule. In this paper, we focus on effectively assessing the strength of logical rules. We propose a new interestingness measure that takes two categories of relations into account, functions and quasi-functions, to assess the degree of certainty of logical rules. We compare the proposed measure with a widely used measure on both synthetic test data and real knowledge bases, and show that it indicates rule quality more effectively, making it an appropriate interestingness measure for logical rule evaluation.

Article: Scalable RFID Authentication Protocol Based on Physically Unclonable Functions (Elsevier, 2023). Kurt, Işıl; Alagöz, Fatih; Akgün, Mete. Citations: WoS 3, Scopus 4.
Radio Frequency Identification (RFID) technology is commonly used for tracking and identifying objects, but it poses serious security and privacy concerns for individuals carrying the tags. Various security protocols have been proposed to address these issues. Unfortunately, many of these solutions suffer from scalability problems, requiring the back-end server to do work linear in the number of tags to identify a single tag. Some protocols offer O(1) or O(log n) identification complexity but remain susceptible to serious attacks, and few protocols consider attacks on the reader side. Our proposed RFID authentication protocol eliminates the need for a back-end search and leverages Physically Unclonable Functions (PUFs) to securely store tag secrets, making it resistant to tag corruption attacks. It provides constant-time identification without sacrificing privacy and offers log2(n) times better identification performance than the state-of-the-art protocol. It unconditionally ensures destructive privacy for tag holders in the event of reader corruption, and it enables offline readers to maintain destructive privacy in case of corruption.
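To make the constant-time identification claim of the RFID entry above concrete, here is a toy simulation, not the paper's protocol: the server keeps a hash table keyed by one-time pseudonyms, so identifying a tag is a single O(1) lookup rather than a linear search over all tags, and the tag's secret is modeled as a key hidden inside an HMAC that stands in for a physical PUF. Message protection and the protocol's actual pseudonym-update rules are omitted.

```python
# Toy simulation of constant-time tag identification with a PUF-derived
# secret. This is NOT the paper's protocol. A real PUF response comes from
# device physics; here it is faked with HMAC over a hidden per-device key
# that is never stored on the server (mimicking tag-corruption resistance).
import hmac, hashlib, os

def puf(device_key: bytes, challenge: bytes) -> bytes:
    # Stand-in for evaluating a physical PUF on a challenge.
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

class Server:
    def __init__(self):
        self.table = {}  # pseudonym -> expected PUF response; O(1) lookup

    def enroll(self, pseudonym, response):
        self.table[pseudonym] = response

    def authenticate(self, pseudonym, response, next_pseudonym, next_response):
        expected = self.table.pop(pseudonym, None)  # no linear search
        if expected is None or not hmac.compare_digest(expected, response):
            return False
        self.enroll(next_pseudonym, next_response)  # rotate for unlinkability
        return True

# Tag side (the device key would live only inside the PUF hardware).
device_key = os.urandom(32)
server = Server()
c0, c1 = os.urandom(16), os.urandom(16)
server.enroll(c0, puf(device_key, c0))  # enrollment phase
ok = server.authenticate(c0, puf(device_key, c0), c1, puf(device_key, c1))
print(ok)  # True
```

The point of the toy is the data structure: identification cost stays constant no matter how many tags are enrolled, which is the scalability property the entry contrasts with linear-search protocols.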
Article: A Privacy-Preserving Scheme for Smart Grid Using Trusted Execution Environment (IEEE, 2023). Akgün, Mete; Üstündağ Soykan, Elif; Soykan, Gürkan. Citations: WoS 16, Scopus 25.
The increasing transformation from the legacy power grid to the smart grid brings new opportunities and challenges to power system operations. Bidirectional communication between home-area devices and the distribution system empowers smart grid functionality, and more granular energy consumption data flowing through the grid enables better smart grid applications. This may also lead to privacy violations, since the data can be used to infer a consumer's residential behavior, the so-called power signature. Energy utilities mostly aggregate the data, especially when it is shared with stakeholders for managing market operations. Although aggregation is a privacy-friendly approach, recent work shows that it does not fully protect privacy; moreover, some applications, like non-intrusive load monitoring, require disaggregated data. The challenge is therefore to find an efficient way to facilitate smart grid operations without sacrificing privacy. In this paper, we propose a privacy-preserving scheme that protects consumer privacy without reducing accuracy for smart grid applications such as load monitoring. The scheme uses a trusted execution environment (TEE) to protect the privacy of data collected from smart appliances (SAs). Because it relies on customer-oriented aggregation rather than regular aggregation methods, it supports customer-oriented smart grid applications and avoids the accuracy loss that stems from disaggregation. The scheme protects the transferred consumption data all the way from the SAs to the utility, so that false data injection attacks on the smart meter aimed at deceiving the grid's energy request are also prevented. We conduct security and game-based privacy analyses under the threat model and provide a performance analysis of our implementation. Our results demonstrate that the proposed method outperforms other privacy methods in communication and computation cost: aggregation for 10,000 customers, each with 20 SAs, executes in approximately 1 second, and the decryption operations performed in the TEE scale linearly, e.g., 172,800 operations take around 1 second while 1,728,000 take around 10 seconds. Because the scheme performs offline aggregation, these results can be scaled up on cloud or hyperscaler infrastructure for real-world applications.

Article: P/Key: PUF-Based Second Factor Authentication (Public Library of Science, 2023). Uysal, Ertan; Akgün, Mete. Citations: WoS 5, Scopus 9.
One-time password (OTP) mechanisms are widely used to strengthen authentication processes. In time-based one-time password (TOTP) mechanisms, the client and server store common secrets, so once the server is compromised, the client's secrets are easy to obtain. Hash-chain-based second-factor authentication protocols have been proposed to solve this issue, but they suffer from latency in client-side OTP generation because of hash-chain traversal, and they can generate only a limited number of OTPs, bounded by the length of the hash chain. In this paper, we propose a second-factor authentication protocol that uses Physically Unclonable Functions (PUFs) to overcome these problems. In the proposed protocol, PUFs are used to store the clients' secrets securely on the server: if the server is compromised, the attacker cannot obtain the seeds of the clients' secrets and cannot generate valid OTPs to impersonate the clients. Against physical attacks, including side-channel attacks on the server side, the protocol has a mechanism that prevents attackers from learning the secrets of a client interacting with the server. Furthermore, it incurs no client-side delay in OTP generation.
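The P/Key entry above refers to two standing problems of hash-chain OTPs. The sketch below shows that classic Lamport-style baseline, not the paper's PUF-based protocol, and makes both problems visible: generating the i-th OTP from the seed costs N - i hash evaluations, and the chain supplies at most N passwords before re-enrollment.

```python
# Sketch of a classic Lamport-style hash-chain OTP: the baseline scheme whose
# drawbacks (client-side traversal latency, fixed number of OTPs) motivate
# P/Key. This is NOT the paper's PUF-based protocol.
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def chain(value: bytes, n: int) -> bytes:
    # Apply H n times: chain(v, n) = H^n(v).
    for _ in range(n):
        value = H(value)
    return value

N = 1000                          # chain length = maximum number of OTPs
seed = b"client-secret-seed"      # known only to the client
server_anchor = chain(seed, N)    # server stores only the chain tip H^N(seed)

def client_otp(i: int) -> bytes:
    # i-th OTP (i = 1..N) is H^(N-i)(seed): computing it takes N - i hashes,
    # so generation latency grows as the chain is consumed toward the seed.
    return chain(seed, N - i)

def server_verify(i: int, otp: bytes, anchor: bytes) -> bool:
    # Server checks that hashing the OTP i times reproduces its anchor,
    # then (in a full implementation) stores the OTP as the new anchor.
    return chain(otp, i) == anchor

otp1 = client_otp(1)
print(server_verify(1, otp1, server_anchor))  # True
```

Because the server holds only H^N(seed) and H is one-way, a server breach does not reveal future OTPs; the price is the traversal cost and the finite chain, which is exactly the trade-off the paper's PUF-based design removes.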
Article: BIM-CAREM: Assessing the BIM Capabilities of Design, Construction and Facilities Management Processes in the Construction Industry (Elsevier, 2023). Gökçen, Yılmaz; Akçamete, Aslı; Demirörs, Onur. Citations: WoS 41, Scopus 50.
BIM adoption has accelerated worldwide because it is an important enabling technology for digitalisation in the construction industry. Adopting BIM requires transforming the traditional building life cycle stages (planning, design, construction and facilities management) into BIM-integrated project deliveries, and assessing the BIM capabilities of these stages helps organisations identify gaps in their BIM uses and improve them. The literature lacks a comprehensive model for assessing the BIM capabilities of individual building life cycle stages and their processes: existing assessment models focus on the BIM maturity of construction projects and organisations, which does not inform the BIM improvements required for individual stages and their processes. Hence, we iteratively developed the Building Information Modelling (BIM) Capability Assessment REference Model (BIM-CAREM) and demonstrated its usability through multiple explanatory case studies performed with two international design and engineering companies and two general contractors in Turkey. We assessed the BIM capabilities of the design, construction and facility management processes of various buildings, i.e., residential buildings, stadiums, hospitals and airports. The results showed that the BIM capability levels of design, construction and facility management processes vary within and across the companies.

Article: Incremental Testing in Software Product Lines - An Event-Based Approach (IEEE, 2023). Beyazıt, Mutlu; Tuğlular, Tuğkan; Öztürk Kaya, Dilek. Citations: WoS 2, Scopus 6.
One way to develop fast, effective, high-quality software products is to reuse previously developed software components and products. For a product family, the software product line (SPL) approach can make reuse more effective; the goal of SPLs is faster development of low-cost, high-quality software products. This paper proposes an incremental model-based approach to testing products in SPLs. The approach uses event-based behavioral models of the SPL features: it reuses existing event-based feature models and event-based product models, along with their test cases, to generate test cases for each new product created by adding a new feature to an existing product. Newly introduced featured event sequence graphs (FESGs) are used for behavioral feature and product modeling; thus, the generated test cases are event sequences. The paper presents evaluations on three software product lines to validate the approach and analyze its characteristics against the state-of-the-art ESG-based testing approach. Results show that the proposed incremental testing approach largely reuses existing test sets, as intended. It is also superior to the state-of-the-art approach in fault detection effectiveness and test generation effort, but inferior in test set size and test execution effort.
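Since the entry above works with event sequence graphs whose test cases are event sequences, here is a minimal sketch of deriving edge-covering event sequences from a toy ESG. The graph, the greedy walk, and the '[' / ']' entry and exit markers follow the usual ESG convention; the paper's FESG-based incremental generation and test-set reuse is considerably more involved.

```python
# Minimal sketch: derive event sequences (test cases) covering every edge of
# a toy event sequence graph (ESG). '[' and ']' are the conventional ESG
# entry and exit pseudo-events; this is not the paper's FESG algorithm.

esg = {  # adjacency list: event -> possible follow-up events
    "[": ["login"],
    "login": ["browse", "logout"],
    "browse": ["browse", "logout"],
    "logout": ["]"],
}

def edge_covering_sequences(graph, start="[", end="]"):
    uncovered = {(u, v) for u, vs in graph.items() for v in vs}
    tests = []
    while uncovered:
        seq, node = [start], start
        while node != end:
            # Prefer an uncovered outgoing edge; otherwise take any edge.
            nxts = graph[node]
            nxt = next((v for v in nxts if (node, v) in uncovered), nxts[0])
            uncovered.discard((node, nxt))
            seq.append(nxt)
            node = nxt
        tests.append(seq)
    return tests

for t in edge_covering_sequences(esg):
    print(" -> ".join(t))
# [ -> login -> browse -> browse -> logout -> ]
# [ -> login -> logout -> ]
```

Each printed sequence is one executable test case; together they exercise every event transition in the model, which is the coverage notion ESG-based testing builds on.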
