Computer Engineering / Bilgisayar Mühendisliği

Permanent URI for this collection: https://hdl.handle.net/11147/10

Now showing 1 - 10 of 43
  • Article
    Link Prediction for Completing Graphical Software Models Using Neural Networks
    (IEEE, 2023) Leblebici, Onur; Tuğlular, Tuğkan; Belli, Fevzi
    Deficiencies and inconsistencies introduced during the modeling of software systems may result in high costs and negatively impact the quality of all developments performed using these models. Therefore, developing more accurate models will aid software architects in developing software systems that match and exceed expectations. This paper proposes a graph neural network (GNN) method for predicting missing connections, or links, in graphical models, which are widely employed in modeling software systems. The proposed method takes allegedly incomplete, primitive graphical models of the system under consideration (SUC), represented as graphs, as input and proposes links between their elements through the following steps: (i) transform the models into graph-structured data and extract features from the nodes, (ii) train the GNN model, and (iii) evaluate the performance of the trained model. Two GNN models based on SEAL and DeepLinker are evaluated using three performance metrics, namely cross-entropy loss, area under the curve, and accuracy. Event sequence graphs (ESGs) are used as an example of applying the approach to an event-based behavioral modeling technique. Examining the results of experiments conducted on various datasets and variations of GNN reveals that missing connections between events in an ESG can be predicted even with relatively small datasets generated from ESG models.
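The data-preparation and link-scoring pipeline described in step (i) can be illustrated with a toy sketch. This is hypothetical code, not the authors' implementation: a common-neighbors heuristic stands in for the trained SEAL/DeepLinker model, and all event names are made up.

```python
# Hypothetical sketch: an event sequence graph (ESG) as graph-structured data
# with simple per-node features. The real method trains a GNN; here a
# common-neighbors heuristic merely shows how candidate links can be scored.

def neighbors(adj, n):
    """Undirected neighborhood of node n in a directed adjacency dict."""
    return set(adj[n]) | {m for m in adj if n in adj[m]}

def node_features(adj):
    """(out-degree, in-degree) feature pair per node (a stand-in for the
    richer node features a GNN would learn)."""
    return {n: (len(adj[n]), sum(n in adj[m] for m in adj)) for n in adj}

def common_neighbor_score(adj, u, v):
    """Score a candidate link (u, v) by the number of shared neighbors."""
    return len(neighbors(adj, u) & neighbors(adj, v))

# A toy, allegedly incomplete ESG: the link "pay" -> "confirm" is missing.
esg = {
    "start": ["browse"],
    "browse": ["add_to_cart"],
    "add_to_cart": ["pay"],
    "pay": ["receipt"],
    "confirm": ["receipt"],
    "receipt": [],
}

features = node_features(esg)
score = common_neighbor_score(esg, "pay", "confirm")  # both reach "receipt"
```

In the trained setting, the degree features would be replaced by learned node embeddings and the heuristic score by the GNN's predicted link probability.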
  • Article
    Citation - WoS: 1
    Citation - Scopus: 1
    How Software Practitioners Perceive Work-Related Barriers and Benefits Based on Their Educational Backgrounds: Insights From a Survey Study
    (IEEE, 2023) Ünlü, Hüseyin; Yürüm, Ozan Raşit; Özcan Top, Özden; Demirörs, Onur
    Survey results show that software practitioners from nonsoftware-related backgrounds face more barriers, have fewer benefits, and feel less satisfied in their work life. However, these differences diminish with more than 10 years of experience and with involvement in software-related graduate programs, certificates, and mentorship.
  • Conference Object
    Sample Selection in Synthetic Irises for Wolf Attacks
    (IEEE, 2023) Akdeniz, Eyüp Kaan; Erdoğmuş, Nesli
    In this study, samples with a higher potential to succeed in wolf attacks are selected from synthetically generated iris images, and the composed subset is shown to pose a more significant threat to an iris recognition system backed by a Presentation Attack Detection (PAD) module than randomly selected samples do. Iris images generated by Deep Convolutional Generative Adversarial Networks (DCGAN) are first filtered by rejection sampling based on the PAD score distribution of real iris images. Next, the probability of zero success in all attack attempts is calculated for each synthetic iris image, using the real iris images in the training set, from which match and non-match score distributions are computed. Synthetic images with the lowest probabilities of zero success are included in the final set. Our hypothesis that this set would be more successful in wolf attacks is tested by comparing its spoofing performance with that of randomly selected sample sets.
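The first filtering step can be sketched with a deterministic stand-in for rejection sampling (the paper's filter is probabilistic): a synthetic iris is kept only if its PAD score lies in a region that is sufficiently dense among real-iris PAD scores. All scores and thresholds below are hypothetical values on a 0-100 scale.

```python
# Illustrative stand-in for the rejection-sampling filter over PAD scores.
# Scores, window, and min_frac are made-up values for demonstration only.

def plausible(pad_score, real_scores, window=5, min_frac=0.3):
    """Keep a synthetic sample if at least min_frac of real PAD scores lie
    within +/- window of its own PAD score."""
    near = sum(abs(s - pad_score) <= window for s in real_scores)
    return near / len(real_scores) >= min_frac

real_pad = [10, 12, 15, 20, 22, 25]   # PAD scores of real iris images
synthetic_pad = [5, 13, 21, 60]       # PAD scores of DCGAN-generated irises
kept = [s for s in synthetic_pad if plausible(s, real_pad)]  # -> [13, 21]
```

Outliers like 60 (far outside the real distribution) and 5 (in a sparse region) are discarded; the surviving samples then proceed to the zero-success-probability ranking.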
  • Article
    Citation - WoS: 3
    Citation - Scopus: 4
    Application of the Law of Minimum and Dissimilarity Analysis to Regression Test Case Prioritization
    (IEEE, 2023) Ufuktepe, Ekincan; Tuğlular, Tuğkan
    Regression testing is one of the most expensive processes in testing. Prioritizing test cases in regression testing is critical to detecting faults sooner within a large set of test cases. We propose a test case prioritization (TCP) technique for regression testing called LoM-Score, inspired by the Law of Minimum (LoM) from biology. This technique calculates the impact probabilities of methods via change impact analysis with forward slicing and orders test cases according to LoM. However, this ordering does not consider the possibility that consecutive test cases may cover the same methods repeatedly; such an ordering can therefore delay the time to reveal faults that exist in other methods. To solve this problem, we enhance the LoM-Score TCP technique with an adaptive, dissimilarity-based coordinate analysis approach. The dissimilarity-based coordinate analysis uses Jaccard similarity to calculate similarity coefficients between test cases in terms of covered methods, and the enhanced technique, called Dissimilarity-LoM-Score (Dis-LoM-Score), applies a corresponding penalty to the ordered test cases. We performed our case study on 10 open-source Java projects from Defects4J, a dataset of real bugs and an infrastructure for controlled experiments provided for software engineering researchers. We then hand-seeded multiple mutants generated by Major, a mutation testing tool, and compared our TCP techniques, LoM-Score and Dis-LoM-Score, with four traditional TCP techniques based on their Average Percentage of Faults Detected (APFD) results.
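The dissimilarity penalty can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: the base scores stand in for LoM-Score impact values, and the test names, coverage sets, and penalty weight are made up.

```python
# Illustrative sketch: Jaccard similarity between method-coverage sets
# penalizes scheduling a test too similar to tests already ordered.

def jaccard(a, b):
    """Jaccard similarity of two coverage sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def dissimilarity_order(tests, coverage, base_score, penalty=0.5):
    """Greedily pick the test whose base score, reduced by a penalty for its
    highest similarity to an already-ordered test, is maximal."""
    ordered, remaining = [], list(tests)
    while remaining:
        def adjusted(t):
            sims = [jaccard(coverage[t], coverage[s]) for s in ordered]
            return base_score[t] - penalty * max(sims, default=0.0)
        best = max(remaining, key=adjusted)
        ordered.append(best)
        remaining.remove(best)
    return ordered

coverage = {"t1": {"m1", "m2"}, "t2": {"m1", "m2"}, "t3": {"m3"}}
base = {"t1": 0.9, "t2": 0.8, "t3": 0.5}   # stand-ins for LoM-Score values
order = dissimilarity_order(["t1", "t2", "t3"], coverage, base)
```

Although t2 has a higher base score than t3, it covers exactly the methods t1 already covers, so the penalty moves t3 ahead of it, which is the effect that helps reveal faults in so-far-uncovered methods sooner.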
  • Conference Object
    Citation - WoS: 2
    Citation - Scopus: 2
    Effort Prediction With Limited Data: A Case Study for Data Warehouse Projects
    (IEEE, 2022) Unlu, Huseyin; Yildiz, Ali; Demirors, Onur
    Organizations may create a sustainable competitive advantage over competitors by using data warehouse (DWH) systems, with which they can assess the current status of their operations at any moment. They can analyze trends and connections using up-to-date data. However, DWH projects tend to fail more often than other projects, as it can be tough to estimate the effort required to build a data warehouse system. Functional size measurement is one of the methods used as an input for estimating the amount of work in a software project. In this study, we formed a measurement basis for DWH projects in an organization based on the COSMIC Functional Size Measurement Method. We mapped COSMIC rules onto two different architectures used for DWH projects in the organization and measured the size of the projects. We calculated the productivity of the projects and compared them with the organization's previous projects and with DWH projects in the ISBSG repository. We could not create an organization-wide effort estimation model, as we had a limited number of projects. As an alternative, we evaluated the success of effort estimation using DWH projects in the ISBSG repository. We also report the challenges we faced during the size measurement process.
  • Conference Object
    Citation - WoS: 7
    Citation - Scopus: 12
    Utilization of Three Software Size Measures for Effort Estimation in the Agile World: A Case Study
    (IEEE, 2022) Unlu, Huseyin; Hacaloglu, Tuna; Buber, Fatma; Berrak, Kivilcim; Leblebici, Onur; Demirors, Onur
    Functional size measurement (FSM) methods, by being systematic and repeatable, are beneficial in the early phases of the software life cycle for core project management activities such as effort, cost, and schedule estimation. However, in agile projects, requirements are kept minimal in the early phases and are detailed over time as the project progresses. This situation makes it challenging to identify the measurement components of FSM methods from requirements in the early phases and hence complicates applying FSM in agile projects. In addition, the existing FSM methods are not fully compatible with today's architectural styles, which are evolving into event-driven decentralized structures. In this study, we present the results of a case study comparing the effectiveness of different size measures: functional (COSMIC Function Points, CFP), event-based (Event Points), and code-length-based (Lines of Code, LOC), on projects that were developed with agile methods and utilized a microservice-based architecture. For this purpose, we measured the size of the project and created effort estimation models based on the three methods. We found that the event-based method estimated effort with better accuracy than the CFP- and LOC-based methods.
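Building an effort estimation model from a size measure typically reduces to regressing effort on size over past projects. The sketch below fits a simple least-squares line; the sizes and efforts are made-up values, not data from the case study.

```python
# Illustrative sketch: ordinary least squares effort = a * size + b.
# The case study fits one such model per size measure (CFP, Event Points,
# LOC) and compares estimation accuracy; all numbers here are hypothetical.

def fit_line(sizes, efforts):
    """Least-squares fit of effort = a * size + b."""
    n = len(sizes)
    mx, my = sum(sizes) / n, sum(efforts) / n
    a = sum((x - mx) * (y - my) for x, y in zip(sizes, efforts)) / \
        sum((x - mx) ** 2 for x in sizes)
    return a, my - a * mx

cfp = [120, 200, 310, 450]        # hypothetical COSMIC sizes of past projects
effort = [300, 480, 770, 1100]    # hypothetical person-hours
a, b = fit_line(cfp, effort)
estimate = a * 260 + b            # effort estimate for a new 260-CFP project
```

Comparing CFP, Event Points, and LOC then amounts to fitting one such model per measure and comparing their estimation errors on held-out projects.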
  • Article
    Citation - WoS: 16
    Citation - Scopus: 25
    A Privacy-Preserving Scheme for Smart Grid Using Trusted Execution Environment
    (IEEE, 2023) Akgün, Mete; Üstündağ Soykan, Elif; Soykan, Gürkan
    The increasing transformation from the legacy power grid to the smart grid brings new opportunities and challenges to power system operations. Bidirectional communications between home-area devices and the distribution system empower smart grid functionalities. More granular energy consumption data flows through the grid and enables better smart grid applications. This may also lead to privacy violations, since the data can be used to infer the consumer's residential behavior, the so-called power signature. Energy utilities mostly aggregate the data, especially if the data is shared with stakeholders for the management of market operations. Although this is a privacy-friendly approach, recent works show that it does not fully protect privacy. On the other hand, some applications, like nonintrusive load monitoring, require disaggregated data. Hence, the challenging problem is to find an efficient way to facilitate smart grid operations without sacrificing privacy. In this paper, we propose a scheme that preserves consumer privacy without reducing accuracy for smart grid applications like load monitoring. In the proposed scheme, we use a trusted execution environment (TEE) to protect the privacy of the data collected from smart appliances (SAs). The scheme allows customer-oriented smart grid applications, as it does not use regular aggregation methods but instead uses customer-oriented aggregation to provide privacy. Hence, the accuracy loss stemming from disaggregation is prevented. Our scheme protects the transferred consumption data all the way from the SAs to the utility, so that possible false data injection attacks on the smart meter that aim to deceive the energy request from the grid are also prevented. We conduct security and game-based privacy analysis under the threat model and provide a performance analysis of our implementation.
Our results demonstrate that the proposed method outperforms other privacy methods in terms of communication and computation cost. The execution time of aggregation for 10,000 customers, each with 20 SAs, is approximately 1 second. The decryption operations performed in the TEE have linear complexity: for example, 172,800 operations take around 1 second, while 1,728,000 operations take around 10 seconds. These results can be scaled up using cloud or hyperscaler infrastructure for real-world applications, as our scheme performs offline aggregation.
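The "customer-oriented aggregation" idea can be illustrated in a few lines: readings are summed per customer rather than across customers, so per-customer totals remain usable for applications like load monitoring. This is a hypothetical sketch; in the real scheme the per-SA readings arrive encrypted and are decrypted only inside the TEE, which is omitted here, and all names and values are made up.

```python
# Illustrative sketch of customer-oriented aggregation: per-customer sums
# instead of a grid-wide sum, preserving per-customer detail.

def customer_oriented_aggregate(readings):
    """readings: {customer_id: {sa_id: consumption_kwh}} -> per-customer sums."""
    return {cust: sum(sas.values()) for cust, sas in readings.items()}

readings = {
    "c1": {"fridge": 0.4, "heater": 1.2},
    "c2": {"fridge": 0.3, "oven": 0.9},
}
totals = customer_oriented_aggregate(readings)  # per-customer, not grid-wide
```

A conventional scheme would publish only the grid-wide sum of all customers; keeping the aggregation boundary at the customer avoids the accuracy loss that disaggregation of such a sum would incur.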
  • Article
    Citation - WoS: 2
    Citation - Scopus: 6
    Incremental Testing in Software Product Lines: An Event-Based Approach
    (IEEE, 2023) Beyazıt, Mutlu; Tuğlular, Tuğkan; Öztürk Kaya, Dilek
    One way of developing fast, effective, and high-quality software products is to reuse previously developed software components and products. In the case of a product family, the software product line (SPL) approach can make reuse more effective. The goal of SPLs is faster development of low-cost and high-quality software products. This paper proposes an incremental model-based approach to testing products in SPLs. The proposed approach utilizes event-based behavioral models of the SPL features. It reuses existing event-based feature models and event-based product models, along with their test cases, to generate test cases for each new product developed by adding a new feature to an existing product. Newly introduced featured event sequence graphs (FESGs) are used for behavioral feature and product modeling; thus, the generated test cases are event sequences. The paper presents evaluations with three software product lines to validate the approach and analyze its characteristics by comparing it to the state-of-the-art ESG-based testing approach. Results show that the proposed incremental testing approach reuses existing test sets to a high degree, as intended. It is also superior to the state-of-the-art approach in terms of fault detection effectiveness and test generation effort, but inferior in terms of test set size and test execution effort.
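The incremental idea can be sketched as a graph diff plus sequence extension: when a new feature adds edges to an existing product model, existing event sequences are reused and extended only to cover the newly introduced edges. This is a heavy simplification of FESG-based generation, and all event names below are illustrative.

```python
# Hypothetical sketch: reuse existing test sequences and extend them to
# cover only the edges a new feature introduces into the product model.

def new_edges(old_model, new_model):
    """Edges present in the new product model but not in the old one."""
    old = {(u, v) for u, vs in old_model.items() for v in vs}
    return [(u, v) for u, vs in new_model.items() for v in vs
            if (u, v) not in old]

def extend_tests(tests, edges):
    """For each new edge (u, v), reuse a sequence ending at u (if any) and
    extend it with v; newly built sequences can themselves be reused."""
    pool = [list(t) for t in tests]
    extended = []
    for u, v in edges:
        base = next((t for t in pool if t[-1] == u), [u])
        seq = base + [v]
        extended.append(seq)
        pool.append(seq)
    return extended

old = {"login": ["browse"], "browse": ["logout"], "logout": []}
new = {"login": ["browse"], "browse": ["search", "logout"],
       "search": ["logout"], "logout": []}
reused = extend_tests([["login", "browse"], ["login", "browse", "logout"]],
                      new_edges(old, new))
```

Only the two new edges ("browse" to "search", "search" to "logout") trigger test generation; the existing sequences serve as prefixes, which is the reuse effect the evaluation measures.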
  • Article
    Citation - WoS: 3
    Citation - Scopus: 3
    A Domain-Specific Language for the Document-Based Model-Driven Engineering of Business Applications
    (IEEE, 2022) Leblebici, Onur; Kardaş, Geylani; Tuğlular, Tuğkan
    To facilitate the development of business applications, a domain-specific language (DSL), called DARC, is introduced in this paper. Business documents, including descriptions of responsibilities, authorizations, and collaborations, are used as first-class entities during model-driven engineering (MDE) with DARC. Hence, the implementation of business applications can be automatically derived from the corresponding document models. The evaluation of using the DARC DSL for the development of commercial business software was performed at an international sales, logistics, and service solution provider. The results showed that the code for all business documents and more than 50% of the responsibility descriptions composing the business applications could be generated automatically by modeling with DARC. Finally, according to the users' feedback, the assessment clearly revealed the adoption of DARC features in terms of DSL quality characteristics, namely functional suitability, usability, reliability, maintainability, productivity, extensibility, compatibility, and expressiveness.
  • Conference Object
    Citation - WoS: 2
    Citation - Scopus: 3
    Secure IoT Update Using Blockchain
    (IEEE, 2021) Kaptan, Melike; Tomur, Emrah; Ayav, Tolga; Erten, Yusuf Murat
    In this study, a platform is devised to send automatic remote updates to embedded devices. The scenario involves Original Equipment Manufacturers (OEMs), software suppliers, blockchain nodes, gateways, and embedded devices. OEMs and software suppliers keep their software on the InterPlanetary File System (IPFS) and send the metadata and hashes of their software to the blockchain nodes so that this information is kept distributed and ready to be requested and used. There are also gateways, which are members of both the blockchain and the IPFS network. Gateways are responsible for requesting a specific update for specific devices from the IPFS database using the metadata kept on the blockchain, and they send those hashed, secure updates to the devices. To provide a traceable data-keeping platform, gateway update operations are handled as transactions in a second blockchain network, the blockchain of the gateways. The system was implemented with these two separate blockchain networks, and it has been shown that, despite the computational overhead on member devices, separating the functions between the two blockchain networks achieves a more reliable and secure platform.
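The integrity check a gateway can perform before pushing an update follows directly from keeping hashes on-chain: hash the bytes fetched from IPFS and compare with the on-chain record. The sketch below is illustrative; the metadata layout is hypothetical, and the blockchain and IPFS accesses are stubbed out as plain values.

```python
# Illustrative sketch of the gateway-side integrity check: accept an update
# only if the SHA-256 digest of the bytes fetched from IPFS matches the
# digest recorded on the blockchain. Metadata fields are hypothetical.
import hashlib

def verify_update(update_bytes, onchain_meta):
    """Compare the update's SHA-256 digest with the on-chain record."""
    digest = hashlib.sha256(update_bytes).hexdigest()
    return digest == onchain_meta["sha256"]

firmware = b"firmware-v2.1 payload"          # bytes fetched from IPFS
meta = {"version": "2.1",                    # record read from the blockchain
        "sha256": hashlib.sha256(firmware).hexdigest()}

ok_genuine = verify_update(firmware, meta)
ok_tampered = verify_update(b"tampered payload", meta)
```

Because the digest is anchored on the blockchain rather than supplied alongside the payload, a compromised IPFS node or transport cannot substitute a tampered update without failing this check.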