Computer Engineering / Bilgisayar Mühendisliği
Permanent URI for this collection: https://hdl.handle.net/11147/10
Search Results (50 results)
Article | Citation - WoS: 22 | Citation - Scopus: 40
Correlation of Critical Success Factors with Success of Software Projects: An Empirical Investigation (Springer Verlag, 2019)
Garousi, Vahid; Tarhan, Ayça; Pfahl, Dietmar; Coşkunçay, Ahmet; Demirörs, Onur

Software engineering researchers have, over the years, proposed different critical success factors (CSFs) which are believed to be critically correlated with the success of software projects. To conduct an empirical investigation into the correlation of CSFs with the success of software projects, we adapt and extend an existing contingency fit model of CSFs. To achieve this objective, we designed an online survey and gathered CSF-related data for 101 software projects in the Turkish software industry. Among our findings is that the three CSFs with the most significant associations with project success were: (1) team experience with the software development methodologies, (2) the team's expertise with the task, and (3) project monitoring and controlling. A comprehensive correlation analysis between the CSFs and project success indicates positive associations between the majority of the factors and variables, although in most cases at non-significant levels. By adding to the body of evidence in this field, the results of the study will be useful for a wide audience. Software managers can use the results to prioritize improvement opportunities in their organizations with respect to the discussed CSFs.
Software engineers might use the results to improve their skills in different dimensions, and researchers might use the results to prioritize and conduct follow-up in-depth studies on those factors.

Article | Citation - WoS: 6 | Citation - Scopus: 6
Estimating Software Robustness in Relation to Input Validation Vulnerabilities Using Bayesian Networks (Springer Verlag, 2018)
Ufuktepe, Ekincan; Tuğlular, Tuğkan

Estimating the robustness of software in the presence of invalid inputs has long been a challenging task, because developers often fail to take the necessary action to validate inputs during the design and implementation of software. We propose a method for estimating the robustness of software in relation to input validation vulnerabilities using Bayesian networks. The proposed method runs on all program functions and/or methods. It calculates a robustness value using information on the existence of input validation code in the functions and the common weakness scores of known input validation vulnerabilities. In the case study, ten well-known software libraries implemented in JavaScript, chosen for their increasing popularity among software developers, are evaluated. Using our method, software development teams can track changes made to software to deal with invalid inputs.

Article | Citation - WoS: 7 | Citation - Scopus: 11
Locality-Aware Task Scheduling for Homogeneous Parallel Computing Systems (Springer Verlag, 2018)
Bhatti, Muhammad Khurram; Öz, Işıl; Amin, Sarah; Mushtaq, Maria; Farooq, Umer; Popov, Konstantin; Brorsson, Mats

In systems with a complex many-core cache hierarchy, exploiting data locality can significantly reduce the execution time and energy consumption of parallel applications. Locality can be exploited at various hardware and software layers. For instance, by implementing private and shared caches in a multi-level fashion, recent hardware designs are already optimised for locality.
However, this is of little use if software scheduling does not cast the execution in a manner that exploits the locality available in the programs themselves. Since programs for parallel systems consist of tasks executed simultaneously, task scheduling becomes crucial for performance in multi-level cache architectures. This paper presents a heuristic algorithm for homogeneous multi-core systems called locality-aware task scheduling (LeTS). The LeTS heuristic is a work-conserving algorithm that takes into account both locality and load balancing in order to reduce the execution time of target applications. The working principle of LeTS is based on two distinctive phases, namely the working task group formation phase (WTG-FP) and the working task group ordering phase (WTG-OP). The WTG-FP forms groups of tasks in order to capture data reuse across tasks, while the WTG-OP determines an optimal order of execution for task groups that minimizes the reuse distance of shared data between tasks. We have performed experiments using randomly generated task graphs by varying three major performance parameters, namely: (1) communication-to-computation ratio (CCR) between 0.1 and 1.0, (2) application size, i.e., task graphs comprising 50, 100, and 300 tasks per graph, and (3) number of cores, with 2-, 4-, 8-, and 16-core execution scenarios. We have also performed experiments using selected real-world applications. The LeTS heuristic reduces the overall execution time of applications by exploiting inter-task data locality. Results show that LeTS outperforms state-of-the-art algorithms in amortizing inter-task communication cost.

Conference Object | Citation - Scopus: 12
Incremental Itemset Mining Based on Matrix Apriori Algorithm (Springer Verlag, 2012)
Oğuz, Damla; Ergenç, Belgin

Databases are updated continuously with increments, and re-running frequent itemset mining algorithms with every update is inefficient.
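As background, the classic level-wise (Apriori-style) frequent itemset miner that such incremental methods improve upon can be sketched as follows. This is an illustrative generic implementation of the level-wise idea, not the Matrix Apriori algorithm or its incremental variant; the example transactions are invented.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Return {itemset: support_count} for all itemsets meeting min_support."""
    transactions = [frozenset(t) for t in transactions]
    # Level 1: count individual items.
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    current = {s for s, c in counts.items() if c >= min_support}
    result = {s: counts[s] for s in current}
    k = 2
    while current:
        # Candidate generation: k-itemsets whose (k-1)-subsets are all frequent.
        items = sorted({i for s in current for i in s})
        candidates = {frozenset(c) for c in combinations(items, k)
                      if all(frozenset(sub) in current
                             for sub in combinations(c, k - 1))}
        # One database scan per level to count candidate supports.
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        current = {s for s, c in counts.items() if c >= min_support}
        result.update({s: counts[s] for s in current})
        k += 1
    return result

fi = frequent_itemsets([["a", "b"], ["a", "c"], ["a", "b", "c"], ["b", "c"]], 2)
print(fi)
```

The repeated full scans at each level are exactly the cost that incremental approaches such as the one above try to avoid when only a small increment of transactions arrives.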
Studies addressing the incremental update problem generally propose incremental itemset mining methods based on the Apriori and FP-Growth algorithms. Besides inheriting the disadvantages of the base algorithms, incremental itemset mining has challenges such as handling (i) increments without re-running the algorithm, (ii) support changes, (iii) new items, and (iv) additions/deletions in increments. In this paper, we focus on the solution of the incremental update problem by proposing the Incremental Matrix Apriori Algorithm. It scans only new transactions, allows the minimum support to change, and handles new items in the increments. The base algorithm, Matrix Apriori, works without candidate generation, scans the database only twice, and brings additional advantages. Performance studies show that Incremental Matrix Apriori provides a speed-up between 41% and 92% when the increment size is varied between 5% and 100%.

Conference Object | Citation - Scopus: 6
A Comprehensive Evaluation of Agile Maturity Self-Assessment Surveys (Springer Verlag, 2018)
Yürüm, Ozan Raşit; Demirörs, Onur; Rabhi, Fethi

Agile methodologies are adopted by a growing number of software organizations. Agile maturity (also called agility) assessment is a way to ascertain the degree of this adoption and determine a course of action to improve agile maturity. There are a number of agile maturity assessment surveys for assessing team or organization agility, and many of them require no guidance. However, the usability of these surveys is not widely studied. The purpose of this study is to determine the available agile maturity self-assessment surveys and evaluate their strengths and weaknesses for agile maturity assessment. An extensive case study is conducted to measure the sufficiency of 22 available agile maturity self-assessment surveys according to seven expected features: comprehensiveness, fitness for purpose, discriminativeness, objectivity, conciseness, generalizability, and suitability for multiple assessment.
Our case study results show that they do not fully satisfy all of the expected features but are helpful to some degree, depending on the purpose of use.

Conference Object | Citation - Scopus: 2
Adapting SPICE for Development of a Reference Model for Building Information Modeling - BIM-CAREM (Springer Verlag, 2018)
Yılmaz, Gökçen; Akçamete, Aslı; Demirörs, Onur

Building Information Modelling (BIM) is widely adopted by Architecture, Engineering, Construction and Facilities Management (AEC/FM) companies around the world due to its benefits, such as improving the collaboration of stakeholders in projects. Effective implementation of BIM in organizations requires assessment of the existing BIM performance of AEC/FM processes. We developed a reference model for BIM capability assessments based on the meta-model of the ISO/IEC 330xx (the most recent version of SPICE) family of standards. BIM-CAREM can be used for identifying the BIM capabilities of AEC/FM processes. The model was updated iteratively based on expert reviews and an exploratory case study, and was evaluated via four explanatory case studies. The assessment results showed that BIM-CAREM is capable of identifying the BIM capabilities of specific processes. In this paper, we present how we utilized ISO/IEC 330xx for developing BIM-CAREM, as well as the iterations of the model and one of the explanatory case studies as an example.

Conference Object | Citation - Scopus: 14
Systematic Mapping Study on Process Mining in Agile Software Development (Springer Verlag, 2018)
Erdem, Sezen; Demirörs, Onur; Rabhi, Fethi

Process mining is a process management technique that allows for the analysis of business processes based on event logs. Its aim is to discover, monitor and improve executed processes by extracting knowledge from event logs readily available in information systems.
The popularity of agile software development methods has been increasing in the software development field over the last two decades, and many software organizations develop software using agile methods. Process mining can provide complementary tools to agile organizations for process management. It can be used to discover the agile processes followed by agile teams, to establish baselines and determine fidelity, or to obtain feedback to improve agility. Despite the potential benefits of using process mining for agile software development, there is a lack of research that systematically analyzes its usage in this context. This paper presents a systematic mapping study on the usage of process mining in agile software development approaches. The aim is to find out the usage areas of process mining in agile software development and to explore commonly used algorithms, data sources, data collection mechanisms, analysis techniques and tools. The study shows that process mining is used in agile software development especially for the purpose of process discovery from task tracking applications. We also observed that source code repositories are the main data sources for process mining, that a diversity of algorithms is used for the analysis of collected data, and that ProM is the most widely used analysis tool for process mining.

Conference Object | Citation - Scopus: 1
Measuring Change in Software Projects Through an Earned Value Lens (Springer Verlag, 2018)
Efe, Pınar; Demirörs, Onur; Benetallah, Boualem

Earned Value Management (EVM) is a common performance management tool for project management. EVM depicts project progress in terms of scope, cost and schedule, and provides future predictions based on trends and patterns. Even though EVM is widely used in disciplines such as manufacturing and construction, it is not common in the software industry.
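The standard earned value quantities underlying this discussion can be sketched as follows. These are the conventional EVM definitions (cost/schedule variance, performance indices, estimate at completion), not the cEVM extension itself, and the project numbers are invented for illustration.

```python
def earned_value_metrics(pv, ev, ac, bac):
    """Conventional EVM metrics.

    pv:  planned value (budgeted cost of work scheduled)
    ev:  earned value (budgeted cost of work performed)
    ac:  actual cost of work performed
    bac: budget at completion
    """
    cv = ev - ac      # cost variance (negative means over budget)
    sv = ev - pv      # schedule variance (negative means behind schedule)
    cpi = ev / ac     # cost performance index (>1 means under budget)
    spi = ev / pv     # schedule performance index (>1 means ahead of schedule)
    eac = bac / cpi   # estimate at completion, assuming current cost efficiency holds
    return {"CV": cv, "SV": sv, "CPI": cpi, "SPI": spi, "EAC": eac}

# Hypothetical project: half the spent money earned value, slightly behind plan.
m = earned_value_metrics(pv=100.0, ev=80.0, ac=160.0, bac=500.0)
print(m)  # CPI = 0.5, SPI = 0.8, EAC = 1000.0
```

An extension such as cEVM would additionally account for the effort spent on changes and rework, which these classic formulas silently fold into actual cost.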
One reason for this underutilization is the mismatch between the inherent nature of software projects and traditional EVM. Traditional EVM ignores change effort, which is predominant in software projects. We have developed cEVM as an extension of traditional EVM that incorporates change and the subsequent rework and evolution costs, in order to measure earned value in software development projects more accurately. In this study, we focus on two applications of cEVM that we performed to explore its usability and to compare cEVM with traditional EVM. This paper discusses the results of the case studies as well as the benefits and difficulties of cEVM.

Conference Object | Citation - Scopus: 3
Ontology Supported Policy Modeling in Opinion Mining Process (Springer Verlag, 2012)
Husaini, Mus'ab; Ko, Andrea; Tapucu, Dilek; Saygın, Yücel

In the e-Society, the spreading services offered by the Social Web have changed the way of communication and cooperation among citizens, policy-makers, governance bodies and civil society actors. One of the main goals of policy-makers is to motivate citizens to participate in policy-making processes. UbiPOL (Ubiquitous Participation Platform for Policy-making, ICT-2009.7.3 (ICT for Governance and Policy Modelling), 2009-2011) aimed to develop a ubiquitous solution that emphasizes citizens' participation in policy-making processes (PMPs) regardless of their current location and time. The ontology-based opinion mining component of the UbiPOL system has a crucial role in citizens' commitment, because it empowers them to contribute to policy making. This paper presents the ontology-based semi-automatic approach and tool for sentiment analysis in the UbiPOL system, which includes lexicon extraction from a large corpus of documents.
Aspect-based opinion summarization of user reviews and its combination with domain ontology development are discussed as well.

Conference Object | Citation - WoS: 11 | Citation - Scopus: 20
Learning Styles Diagnosis Based on Learner Behaviors in Web Based Learning (Springer Verlag, 2009)
Atman, Nilüfer; İnceoğlu, Mustafa Murat; Aslan, Burak Galip

Individuals have different backgrounds, motivations and preferences in their own learning processes. Web-based systems that ignore these differences have difficulty meeting learners' needs effectively. One of these individual differences is learning style. To provide adaptivity that incorporates learning styles, the learning styles of learners first have to be identified. There are many different learning style models in the literature. This study is based on Felder and Silverman's Learning Styles Model and investigates only the active/reflective and visual/verbal dimensions of this model. Instead of having learners fill out a questionnaire, learner behaviors are analyzed with the help of literature-based approaches so that learners' learning styles can be detected.
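The idea of diagnosing the two Felder-Silverman dimensions from behavior rather than a questionnaire can be sketched as below. The behavior counters, the 0.5 thresholds, and the dimension rules are illustrative assumptions for the sake of a runnable example, not the indicators or rules used in the paper.

```python
from dataclasses import dataclass

@dataclass
class LearnerBehavior:
    forum_posts: int        # active learners tend to post and discuss more
    content_revisits: int   # reflective learners revisit material to think it over
    image_video_views: int  # visual learners favour pictures, diagrams, videos
    text_page_views: int    # verbal learners favour written explanations

def diagnose(b: LearnerBehavior) -> dict:
    """Classify the active/reflective and visual/verbal dimensions
    from simple behaviour ratios (hypothetical thresholds)."""
    styles = {}
    # Active/Reflective: share of participation vs. revisiting behaviour.
    total = b.forum_posts + b.content_revisits
    styles["processing"] = (
        "balanced" if total == 0
        else "active" if b.forum_posts / total > 0.5
        else "reflective"
    )
    # Visual/Verbal: share of visual vs. textual content accesses.
    views = b.image_video_views + b.text_page_views
    styles["input"] = (
        "balanced" if views == 0
        else "visual" if b.image_video_views / views > 0.5
        else "verbal"
    )
    return styles

print(diagnose(LearnerBehavior(forum_posts=12, content_revisits=3,
                               image_video_views=5, text_page_views=20)))
# {'processing': 'active', 'input': 'verbal'}
```

A literature-based approach like the one the study describes would replace these invented ratios with behavior patterns and thresholds grounded in published findings on the Felder-Silverman model.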
