Computer Engineering / Bilgisayar Mühendisliği

Permanent URI for this collection: https://hdl.handle.net/11147/10

Search Results

Now showing 1 - 6 of 6
  • Article
    Citation - WoS: 12
    Citation - Scopus: 21
    A Change Management Model and Its Application in Software Development Projects
    (Elsevier, 2019) Efe, Pınar; Demirörs, Onur
    Change is inevitable in software projects, and software engineers strive to find ways to manage it. A completed task can easily return to a team's agenda later because of change demands. Change demands are caused by failures and/or improvements; they require additional effort that in most cases has not been planned upfront, and they affect project progress significantly. Earned Value Management (EVM) is a powerful performance management and feedback tool for project management. EVM depicts project progress in terms of scope, cost, and schedule, and provides future predictions based on trends and patterns of the past. Even though EVM works quite well and is widely used in disciplines such as construction and mining, this is not the case in the software discipline. Software projects require special attention and adaptation for change. In this study, we present a model that measures change and the subsequent rework and evolution costs so that software projects can be monitored accurately. We performed five case studies in five different companies to explore the usability of the proposed model. This paper describes the proposed model and discusses the results of the case studies.
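The classic EVM indicators the abstract builds on can be sketched as follows. This is a minimal illustration of standard EVM arithmetic, not the paper's extended change/rework model; the function name and the numbers in the example are illustrative.

```python
# Minimal sketch of classic Earned Value Management metrics.
# Not the paper's change/rework extension; names and inputs are illustrative.

def evm_metrics(bac, pct_planned, pct_complete, actual_cost):
    """Compute basic EVM indicators for one project snapshot.

    bac          -- budget at completion (total planned cost)
    pct_planned  -- fraction of work scheduled to be done by now
    pct_complete -- fraction of work actually done by now
    actual_cost  -- cost actually incurred so far
    """
    pv = bac * pct_planned      # Planned Value
    ev = bac * pct_complete     # Earned Value
    cpi = ev / actual_cost      # Cost Performance Index  (<1 means over budget)
    spi = ev / pv               # Schedule Performance Index (<1 means behind)
    eac = bac / cpi             # Estimate At Completion (CPI-based forecast)
    return {"PV": pv, "EV": ev, "CPI": cpi, "SPI": spi, "EAC": eac}

# Example: halfway through the schedule, 40% of the work done at 55k cost.
snapshot = evm_metrics(bac=100_000, pct_planned=0.5, pct_complete=0.4,
                       actual_cost=55_000)
```

A CPI below 1 signals cost overrun and an SPI below 1 signals schedule slip; the paper's point is that change-driven rework distorts these indicators unless it is measured explicitly.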
  • Article
    Citation - WoS: 7
    Citation - Scopus: 11
    Locality-Aware Task Scheduling for Homogeneous Parallel Computing Systems
    (Springer Verlag, 2018) Bhatti, Muhammad Khurram; Öz, Işıl; Amin, Sarah; Mushtaq, Maria; Farooq, Umer; Popov, Konstantin; Brorsson, Mats
    In systems with a complex many-core cache hierarchy, exploiting data locality can significantly reduce the execution time and energy consumption of parallel applications. Locality can be exploited at various hardware and software layers. For instance, by implementing private and shared caches in a multi-level fashion, recent hardware designs are already optimised for locality. However, this is all wasted if software scheduling does not cast the execution in a manner that promotes the locality available in the programs themselves. Since programs for parallel systems consist of tasks executed simultaneously, task scheduling becomes crucial for performance in multi-level cache architectures. This paper presents a heuristic algorithm for homogeneous multi-core systems called locality-aware task scheduling (LeTS). The LeTS heuristic is a work-conserving algorithm that takes into account both locality and load balancing in order to reduce the execution time of target applications. The working principle of LeTS is based on two distinctive phases, namely the working task group formation phase (WTG-FP) and the working task group ordering phase (WTG-OP). The WTG-FP forms groups of tasks in order to capture data reuse across tasks, while the WTG-OP determines an order of execution for task groups that minimizes the reuse distance of shared data between tasks. We have performed experiments using randomly generated task graphs by varying three major performance parameters, namely: (1) communication-to-computation ratio (CCR) between 0.1 and 1.0, (2) application size, i.e., task graphs comprising 50, 100, and 300 tasks per graph, and (3) number of cores, with 2-, 4-, 8-, and 16-core execution scenarios. We have also performed experiments using selected real-world applications. The LeTS heuristic reduces the overall execution time of applications by exploiting inter-task data locality. Results show that LeTS outperforms state-of-the-art algorithms in amortizing inter-task communication cost.
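The two-phase structure described above (group tasks that reuse data, then order the groups to keep shared data close in time) can be illustrated with a greedy toy version. This is a simplified sketch of the idea, not the LeTS algorithm itself; task names and data sets are invented for the example.

```python
# Toy two-phase sketch: group tasks by shared data (cf. WTG-FP), then
# order groups so consecutive groups overlap in data (cf. WTG-OP).
# Simplified illustration only -- not the actual LeTS heuristic.

def form_groups(task_data):
    """Greedy grouping: tasks sharing any data item land in one group."""
    groups = []
    for task, data in task_data.items():
        for g in groups:
            if g["data"] & data:              # data reuse detected
                g["tasks"].append(task)
                g["data"] |= data
                break
        else:                                  # no overlap: start a new group
            groups.append({"tasks": [task], "data": set(data)})
    return groups

def order_groups(groups):
    """Greedy ordering: next group is the one overlapping most with the last."""
    if not groups:
        return []
    ordered = [groups.pop(0)]
    while groups:
        last = ordered[-1]["data"]
        groups.sort(key=lambda g: len(g["data"] & last), reverse=True)
        ordered.append(groups.pop(0))
    return ordered

tasks = {"t1": {"a", "b"}, "t2": {"b"}, "t3": {"c"}, "t4": {"c", "d"}}
order = [g["tasks"] for g in order_groups(form_groups(tasks))]
```

The real heuristic additionally balances load across cores and uses reuse distance rather than raw overlap, but the grouping-then-ordering decomposition is the same.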
  • Article
    Citation - WoS: 16
    Citation - Scopus: 26
    Application of a Software Agility Assessment Model – AgilityMod in the Field
    (Elsevier Ltd., 2019) Özcan Top, Özden; Demirörs, Onur
    Adoption of agile values and principles and the transformation of organizations towards agility are not easy and straightforward. Misinterpretation of agile principles and values, and adoption of partial solutions with a few agile practices instead of holistic approaches, prevent organizations from obtaining the full benefits of agile methods. We developed the Software Agility Assessment Reference Model (AgilityMod) for the appraisal of software projects from an agility perspective and to provide guidance on identifying gaps on the road towards agility (agile maturity). The meta-model of AgilityMod was defined in relation to the ISO/IEC 15504 Process Assessment Model. AgilityMod was developed in an iterative and incremental manner by running successive case studies and collecting expert opinions for the evaluation and improvement of the model. The multiple case study that we present here in detail included the implementation of the model in eight software development companies. The results of this case study were evaluated by the case study participants. According to a significant majority of the participants, AgilityMod achieves its purpose.
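ISO/IEC 15504-style assessment models, which AgilityMod's meta-model is defined in relation to, rate each attribute on the standard NPLF scale. The sketch below shows those generic rating bands; the thresholds come from ISO/IEC 15504, not from the paper, and the function name is illustrative.

```python
# Generic ISO/IEC 15504 NPLF rating bands (not AgilityMod-specific).
# An attribute's achievement percentage maps to one of four ratings.

def nplf_rating(achievement_pct):
    """Map an achievement percentage (0-100) to an NPLF rating."""
    if achievement_pct <= 15:
        return "N"   # Not achieved       (0-15%)
    if achievement_pct <= 50:
        return "P"   # Partially achieved (>15-50%)
    if achievement_pct <= 85:
        return "L"   # Largely achieved   (>50-85%)
    return "F"       # Fully achieved     (>85-100%)
```

An appraisal aggregates such ratings across practices to place a project at an agility level and to point out where the gaps are.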
  • Article
    Citation - WoS: 11
    Citation - Scopus: 15
    A Simplified Two-View Geometry Based External Calibration Method for Omnidirectional and PTZ Camera Pairs
    (Elsevier Ltd., 2016) Baştanlar, Yalın
    The external calibration of a camera system is essential for most applications that involve an omnidirectional and a pan-tilt-zoom (PTZ) camera. The methods in the literature fall into two major categories: (1) complete external calibration of the system, which allows all degrees of freedom but is highly time consuming; (2) spatial mapping between pixel coordinates in the omnidirectional camera and the pan/tilt angles of the PTZ camera, instead of explicitly computing the rotation and translation. Most methods in the latter category make restrictive assumptions about the camera setup, such as coinciding optical axes. We propose an external calibration method that is effective and practical. Using two-view geometry principles and making reasonable assumptions about the camera setup, calibration is performed with just two scene points. We extract the rotation using the point correspondences in the images. Locating the PTZ camera in the omnidirectional image yields the translation parameters, and the real distance between the two scene points lets us compute the translation at the correct scale. Results of simulated and real image experiments show that our method works effectively in real-world cases and that its accuracy is comparable to state-of-the-art methods.
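The final step mentioned above, fixing the metric scale of the translation from a known real-world distance, can be sketched simply: two-view geometry recovers the scene only up to scale, and one measured distance between two reconstructed points pins that scale down. This is a generic illustration of the principle, not the paper's full calibration pipeline.

```python
# Hedged sketch of the scale-fixing step: points reconstructed up to scale
# are rescaled so the distance between two known scene points matches the
# measured real-world distance. Coordinates here are illustrative.

import math

def translation_scale(p1, p2, real_distance):
    """p1, p2: 3-D points reconstructed up to an unknown scale.

    Returns the factor by which the translation (and all reconstructed
    points) must be multiplied to obtain metric units.
    """
    reconstructed = math.dist(p1, p2)   # distance in the arbitrary units
    return real_distance / reconstructed

# Two points 2 units apart in the reconstruction, 4 m apart in reality:
s = translation_scale((0.0, 0.0, 1.0), (0.0, 0.0, 3.0), real_distance=4.0)
# scaled translation would be t_metric = s * t_up_to_scale
```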
  • Article
    Citation - WoS: 24
    Citation - Scopus: 31
    Dynamic Replication Strategies in Data Grid Systems: A Survey
    (Springer Verlag, 2015) Tos, Uras; Mokadem, Riad; Hameurlain, Abdelkader; Ayav, Tolga; Bora, Şebnem
    In data grid systems, data replication aims to increase availability, fault tolerance, load balancing, and scalability while reducing bandwidth consumption and job execution time. Several classification schemes for data replication have been proposed in the literature: (i) static vs. dynamic, (ii) centralized vs. decentralized, (iii) push vs. pull, and (iv) objective-function based. Dynamic data replication is a form of data replication that is performed with respect to the changing conditions of the grid environment. In this paper, we present a survey of recent dynamic data replication strategies. We study and classify these strategies by taking the target data grid architecture as the sole classifier. We discuss the key points of the studied strategies and provide a feature comparison according to important metrics. Furthermore, the impact of data grid architecture on dynamic replication performance is investigated in a simulation study. Finally, some important issues and open research problems in the area are pointed out.
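A recurring pattern across the surveyed dynamic strategies is a popularity-triggered replication decision: replicate a file to a site when its access count crosses a threshold and storage permits. The sketch below is a generic illustration of that pattern; the names and thresholds are invented, not taken from any specific strategy in the survey.

```python
# Generic popularity-based dynamic replication decision (illustrative only;
# real strategies in the survey differ in how they weight these factors).

def should_replicate(access_count, threshold, free_storage, file_size):
    """Replicate a file to a site if it is hot enough and space permits."""
    return access_count >= threshold and free_storage >= file_size

# Example site-level check before copying a 100 MB file:
decision = should_replicate(access_count=12, threshold=10,
                            free_storage=500, file_size=100)
```

Strategies then diverge on placement (which site gets the replica) and eviction (which replica to drop when storage fills), which is where the grid architecture studied in the survey matters most.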
  • Article
    Citation - WoS: 6
    Citation - Scopus: 9
    Implementing Fault-Tolerance in Real-Time Programs by Automatic Program Transformations
    (Association for Computing Machinery (ACM), 2008) Ayav, Tolga; Fradet, Pascal; Girault, Alain
    We present a formal approach to implement fault-tolerance in real-time embedded systems. The initial fault-intolerant system consists of a set of independent periodic tasks scheduled onto a set of fail-silent processors connected by a reliable communication network. We transform the tasks such that, assuming the availability of an additional spare processor, the system tolerates one failure at a time (transient or permanent). Failure detection is implemented using heartbeating, and failure masking using checkpointing and rollback. These techniques are described and implemented by automatic program transformations on the tasks' programs. The proposed formal approach to fault-tolerance by program transformations highlights the benefits of separation of concerns. It allows us to establish correctness properties and to compute optimal values of parameters to minimize fault-tolerance overhead. We also present an implementation of our method, to demonstrate its feasibility and its efficiency.
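The two mechanisms the transformations insert, periodic heartbeats for failure detection and checkpoints for rollback recovery, can be mimicked at a high level as below. This is a hedged sketch of the idea only: the paper applies formal program transformations to real-time task code, whereas the names and the callback structure here are illustrative.

```python
# High-level sketch of heartbeat + checkpoint insertion into a task's run.
# Illustrative only; the paper derives where to place these calls formally
# to minimize fault-tolerance overhead.

def run_with_fault_tolerance(task_steps, checkpoint_every,
                             send_heartbeat, save_state):
    """Run task steps with periodic heartbeats and checkpoints."""
    state = {"step": 0}
    for i, step in enumerate(task_steps):
        send_heartbeat()                    # detector learns "I am alive"
        step(state)                         # original task work
        if (i + 1) % checkpoint_every == 0:
            save_state(dict(state))         # snapshot for rollback on failure

# Demo: 4 steps, checkpoint every 2 steps; record heartbeats and snapshots.
beats, checkpoints = [], []
steps = [lambda s: s.update(step=s["step"] + 1) for _ in range(4)]
run_with_fault_tolerance(steps, 2, lambda: beats.append(1), checkpoints.append)
```

On a detected failure (missed heartbeats), the spare processor would restart the task from the last saved snapshot rather than from the beginning, which is the masking half of the scheme.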