Computer Engineering / Bilgisayar Mühendisliği

Permanent URI for this collection: https://hdl.handle.net/11147/10

Search Results

Now showing 1 - 10 of 26
  • Conference Object
    Citation - WoS: 37
    Graph Theoretic Clustering Algorithms in Mobile Ad Hoc Networks and Wireless Sensor Networks (survey)
    (Azerbaijan National Academy of Sciences, 2007) Erciyeş, Kayhan; Dağdeviren, Orhan; Çokuslu, Deniz; Özsoyeller, Deniz
    Clustering in mobile ad hoc networks (MANETs) and wireless sensor networks (WSNs) is an important method to ease topology management and routing in such networks. Once the clusters are formed, the leaders (coordinators) of the clusters may be used to form a backbone for efficient routing and communication. A set of clusters may also provide the underlying physical structure for multicast communication in a higher-level group communication module, which may effectively be used for fault tolerance and for key management for security purposes. We survey graph-theoretic approaches for clustering in MANETs and WSNs and show that although there is a wide range of such algorithms, each may be suitable for a different cross-layer design objective.
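As a minimal illustration of the graph-theoretic flavor of such algorithms, the sketch below elects clusterheads greedily by highest remaining connectivity. It is a generic toy on an abstract adjacency map, not any specific algorithm from the survey; the example topology is an invented 5-node chain.

```python
def elect_clusterheads(adj):
    """Greedy highest-connectivity clusterhead election.

    adj maps each node to the set of its one-hop neighbours; the node
    with the most still-unassigned neighbours becomes a clusterhead and
    absorbs them, repeating until every node belongs to a cluster.
    """
    unassigned = set(adj)
    clusters = {}
    while unassigned:
        head = max(unassigned, key=lambda n: len(adj[n] & unassigned))
        members = (adj[head] & unassigned) | {head}
        clusters[head] = members
        unassigned -= members
    return clusters

# A 5-node chain: interior nodes have the highest degree and win.
topology = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
clusters = elect_clusterheads(topology)
```

Each clusterhead can then serve as a backbone node for routing, as the abstract describes.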
  • Article
    Citation - WoS: 12
    Citation - Scopus: 21
    A Change Management Model and Its Application in Software Development Projects
    (Elsevier, 2019) Efe, Pınar; Demirörs, Onur
    Change is inevitable in software projects, and software engineers strive to find ways to manage it. A completed task can easily return to a team's agenda later because of change demands. Change demands are caused by failures and/or improvements; they require additional effort that in most cases has not been planned upfront, and they affect project progress significantly. Earned Value Management (EVM) is a powerful performance management and feedback tool for project management. EVM depicts project progress in terms of scope, cost, and schedule, and provides predictions based on trends and patterns of the past. Even though EVM works quite well and is widely used in disciplines such as construction and mining, this is not the case for the software discipline. Software projects require special attention and adaptation to change. In this study, we present a model to measure change and the subsequent rework and evolution costs in order to monitor software projects accurately. We performed five case studies in five different companies to explore the usability of the proposed model. This paper depicts the proposed model and discusses the results of the case studies.
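For reference, the standard EVM quantities the abstract builds on can be computed as follows. This is the textbook formulation (PV, EV, CPI, SPI, EAC), not the change-measurement model the paper itself proposes; the numbers in the example are invented.

```python
def evm_metrics(bac, pct_planned, pct_complete, actual_cost):
    # bac: Budget at Completion; pct_planned/pct_complete in [0, 1].
    pv = bac * pct_planned    # Planned Value: budgeted cost of scheduled work
    ev = bac * pct_complete   # Earned Value: budgeted cost of completed work
    cpi = ev / actual_cost    # Cost Performance Index (>1 means under budget)
    spi = ev / pv             # Schedule Performance Index (>1 means ahead)
    eac = bac / cpi           # Estimate at Completion, assuming the trend holds
    return {"PV": pv, "EV": ev, "CPI": cpi, "SPI": spi, "EAC": eac}

# Halfway through the schedule, 40% of the work is done at a cost of 50,000:
m = evm_metrics(bac=100_000, pct_planned=0.5, pct_complete=0.4, actual_cost=50_000)
# CPI = SPI = 0.8: the project is both over budget and behind schedule.
```

Change demands of the kind the paper studies would add unplanned actual cost, driving CPI down and inflating the EAC forecast.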
  • Editorial
    Computational miRNomics
    (Informationsmanagement in der Biotechnologie e.V. (IMBio e.V.), 2016) Allmer, Jens; Yousef, Malik
    The term MicroRNA, or its contraction miRNA, currently appears in 21,215 titles or abstracts available on PubMed (2016-21-22:12:59 EET), published between 1997 and now. 4,108 of these were published in 2016 alone, which signifies the importance of miRNA-related research. MicroRNAs can be detected experimentally using various techniques, such as directional cloning of endogenous small RNAs, but these are time consuming [1]. Additionally, it is necessary for the miRNA and its mRNA target(s) to be co-expressed to infer a functional relationship, which is difficult, if not impossible, to achieve [2]. Since experimental approaches face such difficulties, they have been complemented by computational approaches [3], thereby defining the field of computational miRNomics.
  • Article
    Citation - WoS: 7
    Citation - Scopus: 5
    A Machine Learning Approach for MicroRNA Precursor Prediction in Retro-Transcribing Virus Genomes
    (Informationsmanagement in der Biotechnologie e.V. (IMBio e.V.), 2016) Saçar Demirci, Müşerref Duygu; Toprak, Mustafa; Allmer, Jens
    Identification of microRNA (miRNA) precursors has seen increased efforts in recent years. The difficulty of experimental detection of pre-miRNAs has increased the use of computational approaches, most of which rely on machine learning, especially classification. In order to achieve successful classification, many parameters need to be considered, such as data quality, choice of classifier settings, and feature selection. For the latter, we developed a distributed genetic algorithm on HTCondor to perform feature selection. Moreover, we employed two widely used classification algorithms, libSVM and random forest, with different settings to analyze their influence on the overall classification performance. In this study we analyzed six human retro-transcribing virus genomes: Human endogenous retrovirus K113, Hepatitis B virus (strain ayw), Human T lymphotropic virus 1, Human T lymphotropic virus 2, Human immunodeficiency virus 2, and Human immunodeficiency virus 1. We then predicted pre-miRNAs using the information from known virus and human pre-miRNAs. Our results indicate that these viruses produce novel, as yet unknown miRNA precursors, which warrant further experimental validation.
  • Article
    Citation - WoS: 58
    Citation - Scopus: 72
    A Reference Model for BIM Capability Assessments
    (Elsevier Ltd., 2019) Yılmaz, Gökçen; Akçamete, Aslı; Demirörs, Onur
    Various BIM capability and maturity models have been developed to assist architecture, engineering, construction and facilities management (AEC/FM) organizations in measuring the performance of their BIM utilization. Due to differences in their applicability and focus, these models meet the demands of different BIM users. In this study, eight BIM capability and maturity models identified in the literature are compared based on several criteria. The results show that there is no holistic model that includes process definitions covering the facility life-cycle and contains measures for assessing all of these AEC/FM processes. A reference model, BIM-CAREM, for assessing the BIM capability of AEC/FM processes was therefore developed. It is grounded in the meta-model of the ISO/IEC 330xx family of standards and was developed iteratively via expert reviews and an exploratory case study. It includes AEC/FM processes evaluated using BIM capability levels, their associated BIM attributes, and a four-point rating scale. BIM-CAREM was evaluated by conducting four explanatory case studies; the results showed that it is capable of identifying the BIM capabilities of different AEC/FM processes.
  • Article
    Citation - WoS: 23
    Citation - Scopus: 31
    A Survey on Multithreading Alternatives for Soft Error Fault Tolerance
    (Association for Computing Machinery (ACM), 2019) Öz, Işıl; Arslan, Sanem
    Smaller transistor sizes and reduced voltage levels in modern microprocessors induce higher soft error rates. This trend makes reliability a primary design constraint for computer systems. Redundant multithreading (RMT) makes use of the parallelism in modern systems by employing thread-level time redundancy for fault detection and recovery. RMT can detect faults by running identical copies of the program as separate threads in parallel execution units with identical inputs and comparing their outputs. In this article, we present a survey of RMT implementations at different architectural levels with several design considerations. We explain the implementations in the seminal papers and their extensions and discuss the design choices employed by the techniques. We review both hardware and software approaches by presenting their main characteristics, and we analyze the studies with different design choices regarding their strengths and weaknesses. We also present a classification to help potential users find a suitable method for their requirements and to guide researchers planning to work in this area by providing insights into future trends.
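The output-comparison idea behind RMT can be sketched in a few lines. This is a toy illustration only, not any implementation from the survey: real RMT duplicates execution at the hardware, compiler, or runtime level and synchronizes on stores rather than comparing whole-function return values.

```python
import threading

def redundant_run(fn, *args):
    # Run two identical copies of fn as separate threads with identical
    # inputs (thread-level redundancy) and compare their outputs; a
    # mismatch would signal a (hypothetical) soft error in one copy.
    results = [None, None]

    def worker(slot):
        results[slot] = fn(*args)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    if results[0] != results[1]:
        raise RuntimeError("output mismatch: possible soft error detected")
    return results[0]

checked_sum = redundant_run(sum, [1, 2, 3])  # both copies agree, returns 6
```

The design choices the survey classifies (what to duplicate, where to compare, how to recover) all vary exactly the pieces this sketch fixes: the sphere of replication, the comparison point, and the fault response.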
  • Article
    Citation - WoS: 2
    Citation - Scopus: 2
    Density-Aware Cellular Coverage Control: Interference-Based Density Estimation
    (Elsevier, 2019) Eroğlu, Alperen; Yaman, Okan; Onur, Ertan
    As demand for mobile communications increases, cells have to become smaller to use the scarce spectrum efficiently and to increase capacity, and small-cell networks will thus emerge. These networks may be large in scale and, because their base stations can move, highly dynamic, resembling ad hoc networks. Variations in the density of small-cell networks impact the quality of service and introduce many novel challenges, such as coverage control. We propose two novel base station density estimators in a three-dimensional field: the interference-based density estimator (IDE) and the multi-access edge cloud-based density estimator (CDE). Both employ received signal strength measurements. We validate the two estimators using Monte-Carlo simulations. Furthermore, we analyze the impact of density on network outage in cellular networks and propose a density-aware cell zooming technique. According to our observations, base station (BS) density affects network coverage significantly. Received signal strength (RSS)-based density estimators can easily be implemented and applied in the network communication stack, although they are more prone to shadowing and fading. With the density-aware cell zooming method, the network outage can be managed dynamically by adapting the transmit power, which yields a self-configuring and self-organizing network.
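As a rough illustration of the idea of estimating BS density from RSS measurements and validating the estimate by Monte-Carlo simulation: the sketch below is not the paper's IDE or CDE; the 2-D layout, deterministic power-law path loss (no shadowing or fading), threshold rule, and all parameter values are illustrative assumptions.

```python
import math
import random

def simulate_rss(n_bs, r_max, p_tx=1.0, alpha=4.0, rng=None):
    """Drop n_bs base stations uniformly in a disk of radius r_max and
    return the RSS of each at the origin (p_tx * r^-alpha path loss)."""
    rng = rng or random.Random(0)  # fixed seed for a reproducible run
    rss = []
    for _ in range(n_bs):
        r = r_max * math.sqrt(rng.random())  # uniform point in a disk
        rss.append(p_tx * max(r, 1e-2) ** (-alpha))
    return rss

def estimate_density(rss, r_th, p_tx=1.0, alpha=4.0):
    """Count the BSs whose RSS implies a distance of at most r_th,
    then divide by the disk area to estimate BSs per unit area."""
    rss_th = p_tx * r_th ** (-alpha)
    detected = sum(1 for s in rss if s >= rss_th)
    return detected / (math.pi * r_th ** 2)

true_density = 500 / (math.pi * 100.0 ** 2)  # 500 BSs in a 100 m disk
estimate = estimate_density(simulate_rss(500, 100.0), r_th=50.0)
```

A cell-zooming controller could then adapt `p_tx` up or down as the estimated density falls or rises, which is the coupling the abstract describes.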
  • Article
    Citation - WoS: 6
    Citation - Scopus: 9
    Process Ontology Development Using Natural Language Processing: A Multiple Case Study
    (Emerald Group Publishing, 2019) Gürbüz, Özge; Rabhi, Fethi; Demirörs, Onur
    Purpose: Integrating ontologies with process modeling has gained increasing attention in recent years, since it enhances data representations and makes it easier to query, store and reuse knowledge at the semantic level. The authors focus on a process and ontology integration approach that extracts the activities, roles and other concepts related to process models from organizational sources using natural language processing techniques. As part of this study, a process ontology population (PrOnPo) methodology and tool was developed, which uses natural language parsers for extracting and interpreting sentences and populating an event-driven process chain ontology in a fully automated or semi-automated (user-assisted) manner. The purpose of this paper is to present applications of the PrOnPo tool in different domains. Design/methodology/approach: A multiple case study was conducted by selecting five different domains with different types of guidelines. Process ontologies were developed using the PrOnPo tool in semi-automated and fully automated fashion, as well as manually. The resulting ontologies are compared and evaluated in terms of time-effort and recall-precision metrics. Findings: Across the five domains, the results give an average of 70 percent recall and 80 percent precision for fully automated usage of the PrOnPo tool, showing that it is applicable and generalizable. In terms of efficiency, the effort spent on process ontology development decreased from 250 person-minutes (manual) to 57 person-minutes (semi-automated). Originality/value: The PrOnPo tool is the first to automatically generate integrated process ontologies and process models from guidelines written in natural language.
  • Article
    Citation - WoS: 67
    Citation - Scopus: 91
    A Survey on Modeling and Model-Driven Engineering Practices in the Embedded Software Industry
    (Elsevier Ltd., 2018) Akdur, Deniz; Garousi, Vahid; Demirörs, Onur
    Software-intensive embedded systems have become an essential aspect of our lives. To cope with their growing complexity, modeling and model-driven engineering (MDE) are widely used for the analysis, design, implementation, and testing of these systems. Since a large variety of software modeling practices is used in the domain of embedded software, it is important to understand and characterize the state of the practice as well as the benefits, challenges and consequences of using software modeling approaches in this domain. The goal of this study is to investigate those practices in embedded software engineering projects by identifying to what degree, why and how software modeling and MDE are used. To achieve this objective, we designed and conducted an online survey. Opinions of 627 practicing embedded software engineers from 27 different countries are included in the survey. The survey results reveal important and interesting findings about the state of software modeling and MDE practices in the worldwide embedded software industry.
Among the results: (1) Different modeling approaches (from informal sketches to formalized models) are widely used in the embedded software industry for different needs, and each usage can be effective depending on the modeling characteristics; (2) The majority of participants use UML, and the second most frequently selected response is “Sketch/No formal modeling language”, which shows the widespread informal usage of modeling; (3) In model-driven approaches it is not so important to have a graphical syntax to represent the model (as in UML); depending on the target embedded industrial sector, modeling stakeholders prefer models that can be represented in a machine-readable format (as in DSLs); (4) Sequence diagrams and state machines are the two most popular diagram types; (5) The top motivations for adopting MDE are cost savings, shorter development time, reusability and quality improvement. The survey results shed light on the state of software modeling and MDE practices and provide practical benefits to embedded software professionals.
  • Article
    Citation - WoS: 7
    Citation - Scopus: 11
    Locality-Aware Task Scheduling for Homogeneous Parallel Computing Systems
    (Springer Verlag, 2018) Bhatti, Muhammad Khurram; Öz, Işıl; Amin, Sarah; Mushtaq, Maria; Farooq, Umer; Popov, Konstantin; Brorsson, Mats
    In systems with a complex many-core cache hierarchy, exploiting data locality can significantly reduce the execution time and energy consumption of parallel applications. Locality can be exploited at various hardware and software layers. For instance, by implementing private and shared caches in a multi-level fashion, recent hardware designs are already optimized for locality. However, this is all useless if software scheduling does not cast the execution in a manner that promotes the locality available in the programs themselves. Since programs for parallel systems consist of tasks executed simultaneously, task scheduling becomes crucial for performance in multi-level cache architectures. This paper presents a heuristic algorithm for homogeneous multi-core systems called locality-aware task scheduling (LeTS). The LeTS heuristic is a work-conserving algorithm that takes both locality and load balancing into account in order to reduce the execution time of target applications. The working principle of LeTS is based on two distinct phases, namely the working task group formation phase (WTG-FP) and the working task group ordering phase (WTG-OP). The WTG-FP forms groups of tasks in order to capture data reuse across tasks, while the WTG-OP determines an optimal order of execution for task groups that minimizes the reuse distance of shared data between tasks. We performed experiments using randomly generated task graphs by varying three major performance parameters: (1) the communication-to-computation ratio (CCR), between 0.1 and 1.0, (2) the application size, i.e., task graphs comprising 50, 100, and 300 tasks per graph, and (3) the number of cores, with 2-, 4-, 8-, and 16-core execution scenarios. We also performed experiments using selected real-world applications. The LeTS heuristic reduces the overall execution time of applications by exploiting inter-task data locality. Results show that LeTS outperforms state-of-the-art algorithms in amortizing inter-task communication cost.
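The two-phase structure (form groups that share data, then order groups so reuse distances stay short) can be caricatured with a simple greedy sketch. The grouping and ordering rules below are illustrative stand-ins, not the actual WTG-FP/WTG-OP heuristics from the paper, and the task/data sets in the example are invented.

```python
def form_working_groups(task_data):
    # Phase 1 (grouping, WTG-FP analogue): greedily merge each task into
    # the first group whose working set overlaps its own data.
    groups = []
    for task, data in task_data.items():
        for g in groups:
            if g["data"] & data:
                g["tasks"].append(task)
                g["data"] |= data
                break
        else:
            groups.append({"tasks": [task], "data": set(data)})
    # Phase 2 (ordering, WTG-OP analogue): schedule next the group that
    # shares the most data with the group just scheduled, keeping the
    # reuse distance of shared data short.
    ordered = [groups.pop(0)]
    while groups:
        prev = ordered[-1]["data"]
        nxt = max(groups, key=lambda g: len(g["data"] & prev))
        groups.remove(nxt)
        ordered.append(nxt)
    return [g["tasks"] for g in ordered]

# Tasks A and B share datum "a"; C and D share "c": two working groups.
order = form_working_groups({"A": {"a"}, "B": {"a", "b"}, "C": {"c"}, "D": {"c", "d"}})
```

Running the tasks of a group back-to-back keeps shared data resident in the cache levels closest to the cores, which is the effect LeTS exploits.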