Computer Engineering / Bilgisayar Mühendisliği

Permanent URI for this collection: https://hdl.handle.net/11147/10

  • Article
    Spectral Test Generation for Boolean Expressions
    (World Scientific Publishing, 2023) Ayav, Tolga
    This paper presents a novel method for testing Boolean expressions. It is based on spectral (Fourier) analysis of Boolean functions, which is exploited to generate test inputs. The approach makes three important contributions: (i) it generates a relatively small test suite with high fault-detection capability; (ii) the test suite is prioritized so that the expected fault detection time is shorter; (iii) it is entirely mathematical, relying on a simple and straightforward formula. The proposed method is formulated, and evaluations are performed on both synthetic and real expressions. It is also compared with two common test generation criteria, MC/DC and Minimal MUMCUT. Evaluations show that the test suite generated by the spectral approach is relatively small while offering better and faster fault detection. The approach presented in this paper provides useful insight into how spectral/Fourier analysis of Boolean functions can be exploited in software testing.
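As a rough illustration of the spectral machinery the abstract refers to, the sketch below computes the Fourier (Walsh-Hadamard) coefficients of a small Boolean expression in the standard ±1 encoding. The example expression and the encoding are illustrative assumptions, not the paper's exact test-generation formula.

```python
from itertools import product

def fourier_coefficients(f, n):
    """Fourier (Walsh-Hadamard) coefficients of a Boolean function
    f: {0,1}^n -> {0,1}, using the +/-1 encoding g(x) = 1 - 2*f(x).
    Coefficients are indexed by the characteristic vector of the subset S."""
    inputs = list(product([0, 1], repeat=n))
    coeffs = {}
    for subset in product([0, 1], repeat=n):
        total = 0
        for x in inputs:
            # character chi_S(x) = (-1)^(sum of x_i for i in S)
            chi = (-1) ** sum(s * xi for s, xi in zip(subset, x))
            total += (1 - 2 * f(*x)) * chi
        coeffs[subset] = total / len(inputs)
    return coeffs

# toy expression: (a and b) or c
coeffs = fourier_coefficients(lambda a, b, c: int((a and b) or c), 3)
```

Ranking inputs by the largest-magnitude coefficients is one natural way such a spectrum could drive test selection and prioritization.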
  • Article
    Citation - WoS: 2
    Citation - Scopus: 3
    Mutation-Based Minimal Test Suite Generation for Boolean Expressions
    (World Scientific Publishing, 2023) Ayav, Tolga; Belli, Fevzi
    Boolean expressions are heavily involved in the control flow of programs and in software specifications. Coverage criteria for Boolean expressions aim at producing minimal test suites to detect software faults. Various testing criteria exist, the efficiency of which is usually evaluated through mutation analysis. This paper proposes an integer-programming-based minimal test suite generation technique relying on mutation analysis. The proposed technique also takes into account the cost of fault detection. The technique is optimal in that the resulting test suite is guaranteed to detect all the mutants under given fault assumptions while maximizing the average percentage of faults detected. Therefore, the approach presented can also serve as a reference method for checking the efficiency of any common technique. The method is evaluated using four well-known real benchmark sets of Boolean expressions and is also compared, by way of example, with the MC/DC criterion. The results show that the test suites generated by the proposed method provide better fault coverage and faster fault detection.
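The minimal-suite requirement can be viewed as a covering problem over a mutant-kill matrix. The sketch below solves a tiny hypothetical instance by brute-force enumeration; the paper formulates this as an integer program, which scales far better, and the test/mutant names here are invented.

```python
from itertools import combinations

def minimal_test_suite(kills):
    """Smallest subset of tests whose union kills every mutant.
    kills: dict test_name -> set of killed mutant ids.
    Brute force over subsets by increasing size; an ILP solver
    would replace this loop for realistic problem sizes."""
    all_mutants = set().union(*kills.values())
    tests = list(kills)
    for k in range(1, len(tests) + 1):
        for combo in combinations(tests, k):
            if set().union(*(kills[t] for t in combo)) == all_mutants:
                return set(combo)
    return set(tests)

# hypothetical kill matrix: which tests kill which mutants
suite = minimal_test_suite({
    "t1": {"m1", "m2"},
    "t2": {"m2", "m3"},
    "t3": {"m1", "m3", "m4"},
})
```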
  • Article
    Studying the Co-Evolution of Source Code and Acceptance Tests
    (World Scientific Publishing, 2023) Yalçın, Ali Görkem; Tuğlular, Tuğkan
    Testing is a vital part of achieving good-quality software. Deploying untested code can cause system crashes and unexpected behavior. To reduce these problems, testing should evolve with coding, and test suites should not remain static across software versions: whenever software is updated, new functionality is added, or existing functionality is changed, the test suites should be updated along with it. Software repositories contain valuable information about software systems. Access to older versions, and to the differences in source code and acceptance tests between adjacent versions, can reveal how the software evolved. This research proposes a method and an implementation to analyze 21 open-source real-world projects hosted on GitHub with regard to the co-evolution of the software and its acceptance test suites. The projects are retrieved from their repositories, their versions are analyzed, graphs are created, and the co-evolution process is examined. Observations show that the source code is updated more frequently than the acceptance tests, indicating that source code and acceptance tests do not evolve together. Moreover, the analysis shows that a few acceptance tests cover most of the functionality, which accounts for a significant number of lines of code.
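A minimal sketch of the per-version classification such an analysis needs, assuming simple path heuristics; the `tests`/`.feature` conventions and the version data are illustrative, not the paper's actual detection rules.

```python
def co_evolution_counts(versions):
    """For each release tag, count changed source files vs changed
    acceptance-test files. `versions` maps a tag to the list of file
    paths changed since the previous tag."""
    def is_test(path):
        # heuristic: test directories or Gherkin feature files
        return "test" in path.lower() or path.endswith(".feature")
    counts = {}
    for tag, paths in versions.items():
        tests = sum(1 for p in paths if is_test(p))
        counts[tag] = {"source": len(paths) - tests, "tests": tests}
    return counts

# hypothetical change sets between two releases
counts = co_evolution_counts({
    "v1.1": ["src/app.py", "src/db.py", "tests/acceptance/login.feature"],
    "v1.2": ["src/app.py"],
})
```

Aggregating such counts across many releases is what would reveal whether code and acceptance tests are changing in step.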
  • Article
    Citation - WoS: 1
    Citation - Scopus: 1
    Author Reputation Measurement on Question and Answer Sites by the Classification of Author-Generated Content
    (World Scientific Publishing, 2021) Sezerer, Erhan; Tenekeci, Samet; Acar, Ali; Baloğlu, Bora; Tekir, Selma
    In the field of software engineering, practitioners' share in the constructed knowledge cannot be underestimated and is mostly in the form of grey literature (GL). GL is a valuable resource, though it is subjective and lacks an objective quality assurance methodology. In this paper, a quality assessment scheme is proposed for question and answer (Q&A) sites. In particular, we target Stack Overflow (SO) and Stack Exchange (SE) sites. We model the problem of author reputation measurement as a classification task on author-provided answers. The authors' mean, median, and total answer scores are used as inputs for class labeling. State-of-the-art language models (BERT and DistilBERT) with a softmax layer on top are used as classifiers and compared to SVM and random baselines. Our best model achieves 63.8% accuracy in binary classification on the SO design-patterns tag and 71.6% accuracy on the SE software engineering category. The superior performance on SE software engineering can be explained by its larger dataset size. In addition to the quantitative evaluation, we provide qualitative evidence that the system's predicted reputation labels match the quality of the provided answers.
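A toy version of the class-labeling step, assuming a median split over authors' mean answer scores; the paper also uses median and total scores, so the exact labeling rule and the sample data below are assumptions.

```python
from statistics import mean, median

def reputation_labels(authors):
    """authors: dict author -> list of answer scores.
    Label an author 'high' if their mean answer score is at or above
    the corpus median of mean scores, else 'low' (a simplification)."""
    means = {a: mean(scores) for a, scores in authors.items()}
    cut = median(means.values())
    return {a: ("high" if m >= cut else "low") for a, m in means.items()}

# hypothetical authors and answer scores
labels = reputation_labels({"alice": [5, 7, 9], "bob": [0, 1], "carol": [3, 4]})
```

These labels would then serve as training targets for the answer-text classifiers.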
  • Article
    Citation - WoS: 6
    Citation - Scopus: 7
    Human-Robot Interfaces of the Neuroboscope: a Minimally Invasive Endoscopic Pituitary Tumor Surgery Robotic Assistance System
    (ASME, 2021) Dede, Mehmet İsmet Can; Kiper, Gökhan; Ayav, Tolga; Özdemirel, Barbaros; Tatlıcıoğlu, Enver; Hanalioğlu, Şahin; Işıkay, İlkay
    Endoscopic endonasal surgery is a commonly practiced minimally invasive neurosurgical operation for the treatment of a wide range of skull base pathologies, including pituitary tumors. A common shortcoming of this surgery is the need for a third hand when the endoscope must be held to allow active use of both of the main surgeon's hands. The robotic surgery assistant NeuRoboScope system has been developed to take over the endoscope from the main surgeon's hand while providing the surgeon with the necessary means of controlling the location and direction of the endoscope. One of the main novelties of the NeuRoboScope system is its human-robot interface designs, which regulate and facilitate the interaction between the surgeon and the robot assistant. The human-robot interaction design of the NeuRoboScope system is investigated in two domains: direct physical interaction (DPI) and master-slave teleoperation (MST). A user study indicating the learning curve and ease of use of the MST is presented, and the paper concludes with an outlook on possible new human-robot interfaces for robot-assisted surgery systems.
  • Article
    Citation - WoS: 3
    Citation - Scopus: 3
    Catadioptric Hyperspectral Imaging, an Unmixing Approach
    (Institution of Engineering and Technology, 2020) Özışık Başkurt, Didem; Baştanlar, Yalın; Yardımcı Çetin, Yasemin
    Hyperspectral imaging systems provide dense spectral information on the scene under investigation by collecting data from a large number of contiguous bands of the electromagnetic spectrum. The low spatial resolution of these sensors frequently gives rise to the mixing problem in remote sensing applications. Several unmixing approaches have been developed to handle this challenging problem on perspective images. Omnidirectional imaging systems, on the other hand, provide a 360-degree field of view in a single image at the expense of lower spatial resolution. In this study, we propose a novel imaging system that integrates hyperspectral cameras with mirrors so as to yield catadioptric omnidirectional imaging systems, benefiting from the advantages of both modes. Catadioptric images, produced by combining a camera with a reflecting device, introduce radial warping that depends on the structure of the mirror used in the system. This warping causes non-uniformity in the spatial resolution, which further complicates the unmixing problem. In this context, a novel spatial-contextual unmixing algorithm tailored to the large field of view of the hyperspectral imaging system is developed. The proposed algorithm is evaluated on various real-world and simulated cases. The experimental results show that the proposed approach outperforms the compared methods.
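For intuition about the mixing problem, the sketch below solves the simplest linear unmixing instance: two endmembers with a sum-to-one abundance constraint, via closed-form least squares. The catadioptric and spatial-contextual aspects of the paper's algorithm are not modeled, and the spectra are invented.

```python
def unmix_two_endmembers(pixel, e1, e2):
    """Fraction a of endmember e1 in a mixed pixel under the linear
    mixing model pixel = a*e1 + (1-a)*e2, solved by least squares
    with the sum-to-one constraint and clipped to [0, 1]."""
    d = [x - y for x, y in zip(e1, e2)]
    num = sum((p - y) * dx for p, y, dx in zip(pixel, e2, d))
    den = sum(dx * dx for dx in d)
    return max(0.0, min(1.0, num / den))

# a pixel that is 40% of endmember 1 and 60% of endmember 2
a = unmix_two_endmembers([0.4, 0.6], [1.0, 0.0], [0.0, 1.0])
```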
  • Article
    Citation - WoS: 9
    Citation - Scopus: 14
    Rule-Based Automatic Question Generation Using Semantic Role Labeling
    (Institute of Electronics, Information and Communication Engineers, 2019) Keklik, Onur; Tuğlular, Tuğkan; Tekir, Selma
    This paper proposes a new rule-based approach to automatic question generation. The proposed approach focuses on analysis of both the syntactic and the semantic structure of a sentence. Although the primary objective of the designed system is question generation from sentences, automatic evaluation results show that it also achieves strong performance on reading comprehension datasets, which focus on question generation from paragraphs. In particular, with respect to the METEOR metric, the designed system significantly outperforms all other systems in automatic evaluation. As for human evaluation, the designed system exhibits similar performance, generating the most natural (human-like) questions.
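A minimal illustration of semantic-role-driven rules: given a predicate with agent and patient roles, a question is produced by replacing one role with a wh-word. The templates and the explicitly supplied base verb form are illustrative simplifications, not the paper's rule set.

```python
def generate_questions(agent, verb_past, verb_base, patient):
    """Two toy SRL-style rules: question the agent (ARG0) or the
    patient (ARG1) of a predicate. Real systems handle tense,
    auxiliaries, and many more role patterns."""
    return {
        "agent": f"Who {verb_past} {patient}?",       # ARG0 unknown
        "patient": f"What did {agent} {verb_base}?",  # ARG1 unknown
    }

qs = generate_questions("Marie Curie", "discovered", "discover", "radium")
```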
  • Article
    Citation - Scopus: 1
    Curve Description by Histograms of Tangent Directions
    (Institution of Engineering and Technology, 2019) Köksal, Ali; Özuysal, Mustafa
    The authors propose a novel approach for describing objects based on the contours in their images, using real-valued feature vectors. The approach is particularly suitable when the objects of interest have high-contrast, texture-free images, or when texture variations are so high that textural cues become nuisance factors for classification. The proposed descriptor is suitable for nearest neighbour classification, which remains popular in embedded vision applications where power considerations outweigh performance requirements. The authors describe object outlines purely through histograms of contour tangent directions, mimicking many of the design heuristics of texture-based descriptors such as the scale-invariant feature transform (SIFT). However, unlike SIFT and its variants, the proposed approach is designed to work directly with contour data and is robust to variations inside and outside the object outline, as well as to the sampling of the contour itself. They show that relying on tangent direction estimation, as opposed to gradient computation, yields a more robust description and higher nearest neighbour classification rates in a variety of classification problems.
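The core of such a descriptor can be sketched as a normalized histogram of tangent directions along a sampled contour. The bin count and the modulo-π treatment of direction (rather than orientation) are assumptions here, and SIFT-style spatial pooling and scale handling are omitted.

```python
import math

def tangent_histogram(contour, bins=8):
    """Histogram of tangent directions along a closed contour given
    as a list of (x, y) points, normalized to sum to 1."""
    hist = [0] * bins
    n = len(contour)
    for i in range(n):
        x0, y0 = contour[i]
        x1, y1 = contour[(i + 1) % n]
        # direction modulo pi: a segment and its reverse share a bin
        theta = math.atan2(y1 - y0, x1 - x0) % math.pi
        hist[min(int(theta / math.pi * bins), bins - 1)] += 1
    return [h / n for h in hist]

# an axis-aligned unit square: tangents are only horizontal or vertical
hist = tangent_histogram([(0, 0), (1, 0), (1, 1), (0, 1)])
```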
  • Article
    Citation - WoS: 1
    Citation - Scopus: 2
    Dynamic Itemset Hiding Algorithm for Multiple Sensitive Support Thresholds
    (IGI Global, 2018) Öztürk, Ahmet Cumhur; Ergenç, Belgin
    This article describes how association rule mining is used for extracting relations between items in transactional databases and is beneficial for decision-making. However, association rule mining can pose a threat to the privacy of this knowledge when data is shared without hiding the data owner's confidential association rules. One way of hiding an association rule in a database is to conceal the itemsets (co-occurring items) from which the sensitive association rules are generated; these sensitive itemsets are sanitized by itemset hiding processes. Most existing solutions consider a single support threshold and assume that the databases are static, which is not true in real life. In this article, the authors propose a novel itemset hiding algorithm designed for dynamic database environments that considers multiple itemset support thresholds. Performance comparisons of the algorithm are made with two dynamic algorithms on six different databases. Findings show that the proposed dynamic algorithm is more efficient in terms of execution time and information loss and is guaranteed to hide all sensitive itemsets.
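A naive sketch of the sanitization step: the support of a sensitive itemset is lowered to its threshold by deleting one of its items from supporting transactions. Victim-item selection here is arbitrary and the database is invented; real hiding algorithms additionally minimize side effects such as information loss.

```python
def support(db, itemset):
    """Number of transactions containing every item of the itemset."""
    return sum(1 for t in db if itemset <= t)

def hide_itemset(db, sensitive, max_support):
    """Sanitize db (list of item sets) so the sensitive itemset's
    support drops to max_support, by removing a fixed victim item
    from supporting transactions until the threshold is met."""
    db = [set(t) for t in db]          # work on a copy
    victim = min(sensitive)            # naive, deterministic choice
    for t in db:
        if support(db, sensitive) <= max_support:
            break
        if sensitive <= t:
            t.discard(victim)
    return db

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "b", "c"}, {"b", "c"}]
clean = hide_itemset(db, {"a", "b"}, 1)
```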
  • Article
    Citation - WoS: 3
    Citation - Scopus: 3
    Regression-Based Prediction for Task-Based Program Performance
    (World Scientific Publishing, 2019) Öz, Işıl; Bhatti, Muhammad Khurram; Popov, Konstantin; Brorsson, Mats
    As multicore systems evolve by increasing the number of parallel execution units, parallel programming models have been released to exploit parallelism in applications. The task-based programming model uses task abstractions to specify parallel tasks and schedules the tasks onto processors at runtime. To increase efficiency and obtain the highest performance, it is necessary to identify which runtime configuration is needed and how processor cores must be shared among tasks. Exploring the design space for all possible scheduling and runtime options, especially for large input data, becomes infeasible and calls for statistical modeling. Regression-based modeling determines the effects of multiple factors on a response variable and makes predictions based on statistical analysis. In this work, we propose a regression-based modeling approach to predict task-based program performance for different scheduling parameters with variable data sizes. We execute a set of task-based programs while varying the runtime parameters and conduct systematic measurements of the factors influencing execution time. Our approach uses executions with different configurations for a set of input data and derives regression models to predict execution time for larger input data. Our results show that the regression models provide accurate predictions for validation inputs, with a mean error rate as low as 6.3% and 14% on average across four task-based programs.
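A minimal stand-in for the regression step: ordinary least squares fitted to hypothetical (input size, execution time) measurements and extrapolated to a larger input. The paper's models involve multiple runtime factors; a single predictor is used here only to show the mechanics.

```python
def fit_linear(xs, ys):
    """Closed-form ordinary least squares for y ~ a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# hypothetical measurements: input size -> execution time (seconds)
a, b = fit_linear([1, 2, 4, 8], [1.1, 2.0, 4.1, 7.9])
pred_16 = a + b * 16  # extrapolate to a larger input size
```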