Computer Engineering / Bilgisayar Mühendisliği

Permanent URI for this collection: https://hdl.handle.net/11147/10

Search Results

Now showing 1 - 10 of 11
  • Article
    Citation - WoS: 9
    Citation - Scopus: 14
    Rule-Based Automatic Question Generation Using Semantic Role Labeling
    (Institute of Electronics, Information and Communication Engineers, 2019) Keklik, Onur; Tuğlular, Tuğkan; Tekir, Selma
    This paper proposes a new rule-based approach to automatic question generation. The proposed approach focuses on the analysis of both the syntactic and the semantic structure of a sentence. Although the primary objective of the designed system is question generation from sentences, automatic evaluation results show that it also performs well on reading comprehension datasets, which focus on question generation from paragraphs. In particular, with respect to the METEOR metric, the designed system significantly outperforms all other systems in automatic evaluation. As for human evaluation, the designed system exhibits similar performance by generating the most natural (human-like) questions.
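    The rule-based idea above can be illustrated with a minimal sketch. It assumes semantic role labeling output is already available as a frame of labeled arguments (the `ARG0`/`V`/`ARG1` keys and the naive verb inflection are illustrative assumptions, not the paper's actual rule set):

    ```python
    def generate_questions(frame):
        """Generate simple wh-questions from a hypothetical SRL frame.

        frame: dict with 'ARG0' (agent), 'V' (base-form verb), 'ARG1' (patient).
        Returns a list of question strings produced by two toy rules.
        """
        questions = []
        if frame.get("ARG0") and frame.get("ARG1"):
            # Rule 1: ask about the agent -> "Who <verb>s <patient>?"
            questions.append(f"Who {frame['V']}s {frame['ARG1']}?")
            # Rule 2: ask about the patient -> "What does <agent> <verb>?"
            questions.append(f"What does {frame['ARG0']} {frame['V']}?")
        return questions

    # e.g. for "The cat chases the mouse":
    # generate_questions({"ARG0": "the cat", "V": "chase", "ARG1": "the mouse"})
    ```

    A full system would need many more rules (per role, per verb tense) plus the syntactic analysis the paper describes; this only shows the frame-to-question mapping.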
  • Article
    Citation - Scopus: 1
    Curve Description by Histograms of Tangent Directions
    (Institution of Engineering and Technology, 2019) Köksal, Ali; Özuysal, Mustafa
    The authors propose a novel approach for the description of objects based on the contours in their images, using real-valued feature vectors. The approach is particularly suitable when objects of interest produce high-contrast, texture-free images, or when texture variations are so high that textural cues become nuisance factors for classification. The proposed descriptor is suitable for nearest-neighbour classification, which is still popular in embedded vision applications when power considerations outweigh performance requirements. They describe object outlines purely based on histograms of contour tangent directions, mimicking many of the design heuristics of texture-based descriptors such as the scale-invariant feature transform (SIFT). However, unlike SIFT and its variants, the proposed approach is directly designed to work with contour data, and it is robust to variations inside and outside the object outline as well as to the sampling of the contour itself. They show that relying on tangent direction estimation, as opposed to gradient computation, yields a more robust description and higher nearest-neighbour classification rates in a variety of classification problems.
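    The core of a tangent-direction histogram can be sketched in a few lines (a minimal illustration only: the finite-difference tangent estimate, single global histogram, and bin count are assumptions here, and the paper's descriptor adds SIFT-style spatial layout and robustness heuristics not reproduced below):

    ```python
    import numpy as np

    def tangent_direction_histogram(contour, n_bins=16):
        """Describe a closed contour by a histogram of its tangent directions.

        contour: (N, 2) array of ordered boundary points.
        Returns an L1-normalised real-valued feature vector.
        """
        # Finite-difference tangents between consecutive contour points,
        # wrapping around since the contour is closed.
        diffs = np.roll(contour, -1, axis=0) - contour
        angles = np.arctan2(diffs[:, 1], diffs[:, 0])  # in (-pi, pi]
        hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
        hist = hist.astype(float)
        return hist / hist.sum()  # normalisation gives scale invariance
    ```

    For a unit square traversed counter-clockwise, the four tangent directions (0, π/2, π, −π/2) land in distinct bins, so the descriptor cleanly separates axis-aligned edges.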
  • Article
    Citation - WoS: 1
    Citation - Scopus: 2
    Dynamic Itemset Hiding Algorithm for Multiple Sensitive Support Thresholds
    (IGI Global, 2018) Öztürk, Ahmet Cumhur; Ergenç, Belgin
    This article describes how association rule mining is used for extracting relations between items in transactional databases and is beneficial for decision-making. However, association rule mining can pose a threat to the privacy of this knowledge when the data is shared without hiding the data owner's confidential association rules. One way of hiding an association rule in a database is to conceal the itemsets (co-occurring items) from which the sensitive association rules are generated. These sensitive itemsets are sanitized by itemset hiding processes. Most existing solutions consider a single support threshold and assume that the database is static, which is not true in real life. In this article, the authors propose a novel itemset hiding algorithm designed for the dynamic database environment that considers multiple itemset support thresholds. Performance of the algorithm is compared with two dynamic algorithms on six different databases. Findings show that their dynamic algorithm is more efficient in terms of execution time and information loss and is guaranteed to hide all sensitive itemsets.
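    The basic hiding step — distorting just enough transactions to push a sensitive itemset's support below its threshold — can be sketched as follows (a toy illustration of itemset sanitization in general; the article's algorithm additionally handles dynamic updates, multiple thresholds, and victim-item selection, none of which is modelled here):

    ```python
    def hide_itemset(transactions, sensitive, threshold):
        """Sanitize transactions so the sensitive itemset's support count
        falls below the given threshold.

        transactions: list of sets of items (modified in place).
        sensitive: iterable of items forming the sensitive itemset.
        threshold: support count the itemset must stay below.
        """
        sensitive = set(sensitive)
        supporting = [t for t in transactions if sensitive <= t]
        # Number of supporting transactions to distort so that the
        # remaining support is strictly below the threshold.
        to_sanitize = len(supporting) - threshold + 1
        victim = min(sensitive)  # naive, deterministic victim-item choice
        for t in supporting[:max(0, to_sanitize)]:
            t.discard(victim)
        return transactions
    ```

    Removing a single item from each chosen transaction breaks the co-occurrence while keeping the rest of the transaction intact, which is how such schemes limit information loss.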
  • Article
    Citation - WoS: 3
    Citation - Scopus: 3
    Regression-Based Prediction for Task-Based Program Performance
    (World Scientific Publishing, 2019) Öz, Işıl; Bhatti, Muhammad Khurram; Popov, Konstantin; Brorsson, Mats
    As multicore systems evolve by increasing the number of parallel execution units, parallel programming models have been released to exploit parallelism in applications. The task-based programming model uses task abstractions to specify parallel tasks and schedules tasks onto processors at runtime. To increase efficiency and obtain the highest performance, it is necessary to identify which runtime configuration is needed and how processor cores must be shared among tasks. Exploring the design space for all possible scheduling and runtime options, especially for large input data, becomes infeasible and calls for statistical modeling. Regression-based modeling determines the effects of multiple factors on a response variable and makes predictions based on statistical analysis. In this work, we propose a regression-based modeling approach to predict task-based program performance for different scheduling parameters with variable data size. We execute a set of task-based programs while varying the runtime parameters, and conduct a systematic measurement of the factors influencing execution time. Our approach uses executions with different configurations for a set of input data, and derives regression models to predict execution time for larger input data. Our results show that the regression models provide accurate predictions for validation inputs, with mean error rates as low as 6.3%, and 14% on average, across four task-based programs.
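    The prediction idea can be sketched with ordinary least squares on log-transformed measurements (a minimal sketch under assumed factors: the choice of input size and thread count as predictors, and the log-log model form, are illustrative assumptions, not the paper's actual factor set or model):

    ```python
    import numpy as np

    def fit_time_model(factors, times):
        """Fit a log-log linear regression of execution time on runtime
        factors (here assumed to be e.g. input size and thread count).

        factors: (n, k) array of positive factor values per run.
        times:   (n,)  array of measured execution times.
        Returns the OLS coefficient vector for predict_time().
        """
        X = np.column_stack([np.ones(len(times)), np.log(factors)])
        beta, *_ = np.linalg.lstsq(X, np.log(times), rcond=None)
        return beta

    def predict_time(beta, factors):
        """Predict execution times for new factor settings."""
        X = np.column_stack([np.ones(len(factors)), np.log(factors)])
        return np.exp(X @ beta)
    ```

    Fitting in log space makes multiplicative scaling laws (e.g. time ∝ size^a / threads^b) linear, which is why small-input runs can extrapolate to larger inputs, as in the abstract.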
  • Article
    Citation - WoS: 9
    Citation - Scopus: 13
    Training CNNs with Image Patches for Object Localisation
    (Institution of Engineering and Technology, 2018) Orhan, Semih; Baştanlar, Yalın
    Recently, convolutional neural networks (CNNs) have shown great performance in different problems of computer vision, including object detection and localisation. A novel training approach is proposed for CNNs to localise some animal species whose bodies have distinctive patterns, such as leopards and zebras. To learn characteristic patterns, small patches taken from different body parts of the animals are used to train the models. To find the object location in a test image, all locations are visited in a sliding-window fashion. Crops are fed into the trained CNN, and their classification scores are combined into a heat map. The heat maps are then converted to bounding box estimates for varying confidence scores. The localisation performance of the patch-based training approach is compared with Faster R-CNN, a state-of-the-art CNN-based object detection and localisation method. Experimental results reveal that patch-based training outperforms Faster R-CNN, especially for classes with distinctive patterns.
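    The sliding-window heat-map step described above can be sketched as follows (the `score_fn` callback stands in for the trained patch CNN, which is not reproduced; the patch size, stride, and score-averaging rule are illustrative assumptions):

    ```python
    import numpy as np

    def patch_heat_map(image_shape, score_fn, patch=32, stride=16):
        """Slide a window over an image, score each crop with a patch
        classifier, and accumulate the scores into a per-pixel heat map.

        image_shape: (height, width) of the image.
        score_fn(y, x, patch): classifier confidence in [0, 1] for the
            crop whose top-left corner is (y, x).
        """
        h, w = image_shape
        heat = np.zeros((h, w))
        count = np.zeros((h, w))
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                s = score_fn(y, x, patch)
                heat[y:y + patch, x:x + patch] += s
                count[y:y + patch, x:x + patch] += 1
        # Average over overlapping windows (avoid division by zero at
        # pixels no window reached).
        return heat / np.maximum(count, 1)
    ```

    Thresholding this map at varying confidence levels then yields the bounding-box estimates mentioned in the abstract.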
  • Article
    Citation - WoS: 1
    Citation - Scopus: 1
    Extended Adaptive Join Operator with Bind-Bloom Join for Federated SPARQL Queries
    (IGI Global Publishing, 2017) Oğuz, Damla; Yin, Shaoyi; Ergenç, Belgin; Hameurlain, Abdelkader; Dikenelli, Oğuz
    The goal of query optimization in query federation over linked data is to minimize both the response time and the completion time; communication time has the highest impact on both. Static query optimization can end up with inefficient execution plans due to unpredictable data arrival rates and missing statistics. This study extends the adaptive join operator, which always begins with a symmetric hash join to minimize the response time and can change the join method to a bind join to minimize the completion time. The authors extend the adaptive join operator with a bind-bloom join to further reduce the communication time and, consequently, to minimize the completion time. They compare the new operator with the symmetric hash join, bind join, bind-bloom join, and adaptive join operator with respect to response time and completion time. Performance evaluation shows that the extended operator provides optimal response time and further reduces the completion time. Moreover, it can adapt to different data arrival rates.
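    The communication saving behind a bloom-style join can be sketched with a minimal Bloom filter (an illustration of the general idea only — the filter size, hash scheme, and join interface below are assumptions, not the operator described in the article):

    ```python
    import hashlib

    class BloomFilter:
        """Minimal Bloom filter: a compact bit array summarising a set,
        with possible false positives but no false negatives."""

        def __init__(self, m=1024, k=3):
            self.m, self.k, self.bits = m, k, bytearray(m)

        def _positions(self, item):
            # Derive k bit positions from salted SHA-256 digests.
            for i in range(self.k):
                h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.m

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos] = 1

        def might_contain(self, item):
            return all(self.bits[pos] for pos in self._positions(item))

    def bloom_filter_join(local_keys, remote_rows):
        """Instead of shipping every binding to the remote endpoint, send
        a Bloom filter over the local join keys; only rows whose key might
        match are returned, reducing communication."""
        bf = BloomFilter()
        for key in local_keys:
            bf.add(key)
        return [row for row in remote_rows if bf.might_contain(row[0])]
    ```

    Because the filter admits no false negatives, every true join partner survives the pruning; the few false positives are discarded by the exact join afterwards.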
  • Article
    Citation - WoS: 7
    Citation - Scopus: 9
    Input Contract Testing of Graphical User Interfaces
    (World Scientific Publishing Co. Pte Ltd, 2016) Tuğlular, Tuğkan; Belli, Fevzi; Linschulte, Michael
    User inputs are critical for the security, safety, and reliability of software systems. This paper proposes a new concept called user input contracts, which is an integral part of a design-by-contract-supplemented development process, and a model-based testing approach to detect violations of user input contracts. The approach generates test cases from an input contract integrated with a graph-based model of the user interface specification and applies them to the system under consideration. The paper presents a proof-of-concept tool that has been developed and used to validate the approach through experiments. The experiments are conducted on a web-based system for marketing tourist services to analyze the input robustness of the system under consideration with respect to user input contracts.
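    A user input contract and a simple violation check can be sketched as follows (a toy illustration: the dictionary-based contract fields and boundary-value generator are assumptions for exposition, not the paper's contract notation or its graph-based test generation):

    ```python
    def violates_contract(value, contract):
        """Return True when a user input violates a simple input contract.

        contract: dict with optional keys 'min_len', 'max_len', and
        'allowed' (a set of permitted characters).
        """
        if "min_len" in contract and len(value) < contract["min_len"]:
            return True
        if "max_len" in contract and len(value) > contract["max_len"]:
            return True
        if "allowed" in contract and any(c not in contract["allowed"] for c in value):
            return True
        return False

    def boundary_test_cases(contract):
        """Generate boundary-value test inputs from a length contract:
        just below, at, and just above each length limit."""
        lo = contract.get("min_len", 0)
        hi = contract.get("max_len", 10)
        return ["x" * n for n in (lo - 1, lo, hi, hi + 1) if n >= 0]
    ```

    Feeding such boundary inputs to each GUI field and checking the contract serves as the test oracle: any accepted input that violates the contract reveals missing input validation.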
  • Article
    Citation - WoS: 1
    Citation - Scopus: 1
    Accounting for Product Similarity in Software Project Duration Estimation
    (World Scientific Publishing Co. Pte Ltd, 2016) Taştekin, Semra Yılmaz; Erten, Yusuf Murat; Bilgen, Semih
    We extend an existing model, proposed for estimating project duration for industrial projects in general, to software-intensive systems projects. We show, through nine case studies from different sectors, that product similarity, measured in terms of requirements reuse, can be incorporated into that model to improve its applicability to software-intensive systems projects.
  • Article
    Citation - WoS: 4
    Citation - Scopus: 8
    Prioritizing MCDC Test Cases by Spectral Analysis of Boolean Functions
    (John Wiley and Sons Inc., 2017) Ayav, Tolga
    Test case prioritization aims at scheduling test cases in an order that improves some performance goal; one such goal is a measure of how quickly faults are detected. Such prioritization can be performed by exploiting the Fault Exposing Potential (FEP) parameters associated with the test cases. FEP is usually approximated by mutation analysis under certain fault assumptions. Although this technique is effective, it can be relatively expensive compared with other prioritization techniques. This study proposes a cost-effective FEP approximation for prioritizing Modified Condition/Decision Coverage (MCDC) test cases. A strict negative correlation between the FEP of an MCDC test case and the influence value of the associated input condition allows the test cases to be ordered easily, without the need for extensive mutation analysis. The method is entirely based on mathematics and provides useful insight into how spectral analysis of Boolean functions can benefit software testing.
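    The influence value mentioned above is a standard quantity from the analysis of Boolean functions: the fraction of inputs on which flipping one variable changes the function's output. A brute-force sketch (exponential in the number of conditions, so for illustration only — the paper's spectral method and its exact FEP correlation are not reproduced here):

    ```python
    from itertools import product

    def influence(f, n, i):
        """Influence of variable i on Boolean function f of n variables:
        the fraction of input vectors where flipping bit i changes f."""
        changed = 0
        for bits in product((0, 1), repeat=n):
            flipped = list(bits)
            flipped[i] ^= 1
            if f(bits) != f(tuple(flipped)):
                changed += 1
        return changed / 2 ** n

    def prioritise_conditions(f, n):
        """Order input conditions by descending influence, used here as a
        stand-in proxy for fault-exposing potential."""
        return sorted(range(n), key=lambda i: -influence(f, n, i))
    ```

    For a decision like `x0 and (x1 or x2)`, the guard `x0` has influence 0.75 while `x1` and `x2` each have 0.25, so the MCDC test case exercising `x0` would be scheduled first under this ordering.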
  • Article
    Citation - WoS: 3
    Citation - Scopus: 3
    Model-Based Contract Testing of Graphical User Interfaces
    (Institute of Electronics, Information and Communication Engineers, 2015) Tuğlular, Tuğkan; Linschulte, Michael; Belli, Fevzi; Müftüoğlu, Arda
    Graphical User Interfaces (GUIs) are critical for the security, safety, and reliability of software systems. Injection attacks, for instance via SQL, succeed due to insufficient input validation and can be avoided if contract-based approaches, such as Design by Contract, are followed in the software development lifecycle of GUIs. This paper proposes a model-based testing approach for detecting GUI data contract violations, which may result in serious failures such as a system crash. A contract-based model of GUI data specifications is used to develop test scenarios and to serve as the test oracle. The technique introduced uses multi-terminal binary decision diagrams, which are designed as an integral part of decision table-augmented event sequence graphs, to implement a GUI testing process. A case study, which validates the presented approach on a port scanner written in the Java programming language, is presented. Copyright © 2015 The Institute of Electronics, Information and Communication Engineers.