Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection
Permanent URI for this collection: https://hdl.handle.net/11147/7148
29 results
Article | Citation - WoS: 1 | Citation - Scopus: 1
How Software Practitioners Perceive Work-Related Barriers and Benefits Based on Their Educational Backgrounds: Insights From a Survey Study (IEEE, 2023)
Ünlü, Hüseyin; Yürüm, Ozan Raşit; Özcan Top, Özden; Demirörs, Onur
Survey results show that software practitioners from non-software-related backgrounds face more barriers, have fewer benefits, and feel less satisfied in their work life. However, these differences diminish with more than 10 years of experience and with involvement in software-related graduate programs, certificates, and mentorship.

Article | Citation - Scopus: 1
An Interestingness Measure for Knowledge Bases (Elsevier, 2023)
Oğuz, Damla; Soygazi, Fatih
Association rule mining and logical rule mining both aim to discover interesting relationships in data or knowledge. In association rule mining, relationships are identified based on the occurrence of items in a dataset, while in logical rule mining, relationships are determined based on logical relationships between atoms in a knowledge base. Association rule mining has been widely studied in transactional databases, mainly for market basket analysis. Confidence has become the most widely used interestingness measure to assess the strength of a rule. Many other interestingness measures have been proposed, since confidence can be insufficient to filter out negatively associated relationships. Recently, logical rule mining has become an important area of research, as new facts can be inferred by applying discovered logical rules. Such rules can be used for reasoning, for identifying potential errors in knowledge bases, and for better understanding data. However, there are currently only a few measures for logical rule mining. Furthermore, current measures do not consider relations that can have several objects, called quasi-functions, which can dramatically alter the interestingness of a rule. In this paper, we focus on effectively assessing the strength of logical rules.
We propose a new interestingness measure that takes into account two categories of relations, functions and quasi-functions, to assess the degree of certainty of logical rules. We compare our proposed measure with a widely used measure on both synthetic test data and real knowledge bases. We show that it is more effective in indicating rule quality, making it an appropriate interestingness measure for logical rule evaluation. © 2023 Karabuk University. Publishing services by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

Article | Citation - WoS: 3 | Citation - Scopus: 4
Scalable RFID Authentication Protocol Based on Physically Unclonable Functions (Elsevier, 2023)
Kurt, Işıl; Alagöz, Fatih; Akgün, Mete
Radio Frequency Identification (RFID) technology is commonly used for tracking and identifying objects. However, this technology poses serious security and privacy concerns for individuals carrying the tags. To address these issues, various security protocols have been proposed. Unfortunately, many of these solutions suffer from scalability problems, requiring the back-end server to do work linear in the number of tags to identify a single tag. Some protocols offer O(1) or O(log n) identification complexity but are still susceptible to serious attacks. Few protocols consider attacks on the reader side. Our proposed RFID authentication protocol eliminates the need for a search in the back-end and leverages Physically Unclonable Functions (PUFs) to securely store tag secrets, making it resistant to tag corruption attacks. It provides constant-time identification without sacrificing privacy and offers log2 n times better identification performance than the state-of-the-art protocol. It ensures destructive privacy for tag holders in the event of reader corruption, without any conditions.
Furthermore, it enables offline readers to maintain destructive privacy in case of corruption.

Article | Citation - WoS: 3 | Citation - Scopus: 6
An Exploratory Case Study Using Events as a Software Size Measure (Springer, 2023)
Hacaloğlu, Tuna; Demirörs, Onur
Software size measurement is a critical task in the Software Development Life Cycle (SDLC). It is the primary input for effort estimation models and an important measure for project control and process improvement. There exist various size measurement methods whose success has already been proven for traditional software architectures and application domains. Among them, functional size measurement (FSM) attracts specific attention due to its applicability at the early phases of the SDLC. Although FSM methods were successful on database-centric, transaction-oriented stand-alone applications, in contemporary software development projects Agile methods are widely used, a centralized database and a relational approach are no longer used as before, and the requirements suffer from a lack of detail. Today's software is frequently service-based, highly distributed, message-driven, scalable, and has unprecedented levels of availability. In this new era, event-driven architectures are emerging as one of the prominent approaches, where the 'event' concept largely replaces the 'data' concept. Considering the important place of events in contemporary architectures, we approached the software size measurement problem from the event-driven perspective and explored how useful events are as a size measure in comparison to data-movement-based methods.
The findings of our study indicate that events are promising for measurement and should be investigated further in order to be formalized into a measurement model, thereby providing a replicable approach.

Article | Citation - WoS: 41 | Citation - Scopus: 50
BIM-CAREM: Assessing the BIM Capabilities of Design, Construction and Facilities Management Processes in the Construction Industry (Elsevier, 2023)
Gökçen, Yılmaz; Akçamete, Aslı; Demirörs, Onur
BIM adoption has accelerated worldwide since it is an important enabling technology for digitalisation in the construction industry. Adopting BIM requires transforming the traditional building life cycle stages (planning, design, construction and facilities management) into BIM-integrated project deliveries. Assessing the BIM capabilities of these stages helps organisations to identify gaps in their BIM uses and improve them. There is a lack of a comprehensive model in the literature for assessing the BIM capabilities of individual building life cycle stages and their processes. Existing assessment models focus on assessing the BIM maturity of construction projects and organisations, which does not inform the required BIM improvements for individual stages and their processes. Hence, we iteratively developed the Building Information Modelling (BIM) Capability Assessment REference Model (BIM-CAREM) and demonstrated its usability through multiple explanatory case studies performed with two international design and engineering companies and two general contractors in Turkey. We assessed the BIM capabilities of design, construction and facility management processes of various buildings, i.e. residential buildings, stadiums, hospitals and airports.
The results showed that the BIM capability levels of design, construction and facility management processes vary within and across the companies.

Article | Citation - WoS: 4 | Citation - Scopus: 4
Integrative Biological Network Analysis To Identify Shared Genes in Metabolic Disorders (Institute of Electrical and Electronics Engineers, 2022)
Tenekeci, Samet; Işık, Zerrin
Identification of common molecular mechanisms in interrelated diseases is essential for better prognoses and targeted therapies. However, the complexity of metabolic pathways makes it difficult to discover common disease genes underlying metabolic disorders; this requires more sophisticated bioinformatics models that combine different types of biological data and computational methods. Accordingly, we built an integrative network analysis model to identify shared disease genes in metabolic syndrome (MS), type 2 diabetes (T2D), and coronary artery disease (CAD). We constructed weighted gene co-expression networks by combining gene expression, protein-protein interaction, and gene ontology data from multiple sources. For 90 different configurations of disease networks, we detected the significant modules by using the MCL, SPICi, and Linkcomm graph clustering algorithms. We also performed a comparative evaluation of the disease modules to determine the method providing the highest biological validity. By overlapping the disease modules, we identified 22 shared genes for MS-CAD and T2D-CAD. Moreover, 19 of these genes were directly or indirectly associated with the relevant diseases in previous medical studies.
This study not only demonstrates the performance of different biological data sources and computational methods in disease-gene discovery, but also offers potential insights into the common genetic mechanisms of metabolic disorders.

Article | Citation - WoS: 6 | Citation - Scopus: 10
Efficient Privacy-Preserving Whole-Genome Variant Queries (Oxford University Press, 2022)
Akgün, Mete; Pfeifer, Nico; Kohlbacher, Oliver
Motivation: Diagnosis and treatment decisions based on genomic data have become widespread as the cost of genome sequencing gradually decreases. In this context, disease-gene association studies are of great importance. However, genomic data are very sensitive compared to other data types and contain information about individuals and their relatives. Many studies have shown that this information can be obtained from the query-response pairs on genomic databases. In this work, we propose a method that uses secure multi-party computation to query genomic databases in a privacy-protected manner. The proposed solution privately outsources genomic data from arbitrarily many sources to two non-colluding proxies and allows genomic databases to be safely stored in semi-honest cloud environments. It provides data privacy, query privacy and output privacy by using XOR-based sharing and, unlike previous solutions, it allows queries to run efficiently on hundreds of thousands of genomic records. Results: We measured the performance of our solution with parameters similar to real-world applications. It is possible to query a genomic database with 3 000 000 variants with five genomic query predicates in under 400 ms. Querying 1 048 576 genomes, each containing 1 000 000 variants, for the presence of five different query variants can be achieved in approximately 6 min with a small amount of dedicated hardware and connectivity. These execution times are in the right range to enable real-world applications in medical research and healthcare.
Unlike previous studies, it is possible to query multiple databases with response times fast enough for practical application. To the best of our knowledge, this is the first solution that provides this performance for querying large-scale genomic data.

Article | Citation - Scopus: 1
A Method for Integrated Business Process Modeling and Ontology Development (Emerald, 2022)
Coşkunçay, Ahmet; Demirörs, Onur
Purpose: From a knowledge management point of view, business process models and ontologies are two essential knowledge artifacts for organizations, and they consume similar information sources. In this study, the PROMPTUM method for integrated process modeling and ontology development, which adheres to well-established practices, is presented. The method is intended to guide practitioners who develop both ontologies and business process models in the same or similar domains. Design/methodology/approach: The method is supported by a recently developed toolset, which supports the modeling of relations between the ontologies and the labels within process model collections. This study introduces the method and its companion toolset. An explanatory study, which includes two case studies, was designed and conducted to reveal and validate the benefits of using the method. A follow-up semi-structured interview then identified the perceived benefits of the method. Findings: Application of the method revealed several benefits, including improvements in the consistency and completeness of the process models and ontologies. The method brings the best practices of two domains together and guides the use of labels within process model collections in ontology development, and of ontology resources in business process modeling. Originality/value: The proposed method with its tool support is a pioneer in enabling the labels, and the terms within the labels, in process model collections to be managed consistently with ontology resources.
Establishing these relations enables the definition and management of process model elements as resources in domain ontologies. Once the PROMPTUM method is utilized, a related resource is managed as a single resource representing the same real-world object in both artifacts. An explanatory study has shown that improvement in the consistency and completeness of process models and ontologies is possible with integrated process modeling and ontology development.

Article | Citation - WoS: 7 | Citation - Scopus: 8
Tracking Code Bug Fix Ripple Effects Based on Change Patterns Using Markov Chain Models (Institute of Electrical and Electronics Engineers Inc., 2022)
Ufuktepe, Ekincan; Tuğlular, Tuğkan; Palaniappan, Kanappan
Change impact analysis evaluates the changes that are made in the software and finds the ripple effects; in other words, it finds the affected software components. Code changes and bug fixes can have a high impact on code quality by introducing new vulnerabilities or increasing their severity. A recent high-visibility example of this is the code changes in the log4j web software (CVE-2021-45105) to fix known vulnerabilities through change types such as removing and adding methods. This bug fix process exposed further code security concerns. In this article, we analyze the most common set of bug fix change patterns to gain a better understanding of the distribution of software changes and their impact on code quality. To achieve this, we implemented a tool that compares two versions of the code and extracts the changes that have been made. Then, we investigated how these changes are related to change impact analysis. In our case study, we identified the change types for bug-inducing and bug fix changes using the QuixBugs dataset. Furthermore, we used 13 of the projects and 621 bugs from Defects4J to identify the common change types in bug fixes.
Then, to find the change types that cause an impact on the software, we performed an impact analysis on a subset of the projects and bugs of Defects4J. The results showed that, on average, 90% of the bug fix change types are adding a new method declaration and changing the method body. We then investigated whether these changes cause an impact or ripple effect in the software by performing a Markov chain-based change impact analysis. The results show that the bug fix changes had impact rates only within a range of 0.4-5%. Furthermore, we performed a statistical correlation analysis to find whether any of the bug fixes have a significant correlation with the impact of the change. The results showed that there is a negative correlation between the caused impact and the change types of adding a new method declaration and changing the method body. On the other hand, we found a positive correlation between the caused impact and changing the field type.

Article | Citation - WoS: 7 | Citation - Scopus: 8
Long-Term Image-Based Vehicle Localization Improved With Learnt Semantic Descriptors (Elsevier, 2022)
Çınaroğlu, İbrahim; Baştanlar, Yalın
Vision-based solutions for the localization of vehicles have become popular recently. In this study, we employ an image retrieval based visual localization approach, in which database images are kept with GPS coordinates and the location of the retrieved database image serves as the position estimate of the query image in a city-scale driving scenario. Regarding this approach, most existing studies only use descriptors extracted from RGB images and do not exploit semantic content. We show that localization can be improved via descriptors extracted from semantically segmented images, especially when the environment is subject to severe illumination, seasonal or other long-term changes. We worked on two separate visual localization datasets, one of which (Malaga Streetview Challenge) was generated by us and made publicly available.
Following the extraction of semantic labels in images, we trained a CNN model for localization in a weakly-supervised fashion with a triplet ranking loss. The optimized semantic descriptor can be used on its own for localization, or preferably together with a state-of-the-art RGB image based descriptor in a hybrid fashion to improve accuracy. Our experiments reveal that the proposed hybrid method is able to increase the localization performance of the standard (RGB image based) approach by up to 7.7% in Top-1 Recall.
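As context for the triplet ranking loss mentioned in the last abstract, the hinge-style formulation commonly used in weakly-supervised retrieval training can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the toy 2-D vectors stand in for learnt CNN descriptors, and the margin value of 0.1 is an assumption for illustration only.

```python
import math


def euclidean(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def triplet_ranking_loss(anchor, positive, negative, margin=0.1):
    """Hinge-style triplet loss: the positive (image of the same place)
    should be closer to the anchor than the negative (different place)
    by at least `margin`; otherwise a positive loss drives training."""
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)


# Toy descriptors: the positive is already much closer than the negative,
# so the ranking constraint is satisfied and the loss is zero.
print(triplet_ranking_loss((0.0, 0.0), (0.0, 1.0), (3.0, 0.0)))  # 0.0

# Here the negative is closer than the positive, so the loss is positive
# and gradient descent would push the descriptors to reorder the ranking.
print(triplet_ranking_loss((0.0, 0.0), (2.0, 0.0), (1.0, 0.0)))
```

In retrieval-based localization, minimizing this loss over many (anchor, positive, negative) image triplets shapes the descriptor space so that nearest-neighbour search over GPS-tagged database images returns images of the correct place.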
