Püskülcü, Halis

Name Variants
Puskulcu, H.
Püskülcü, H.
Puskulcu, H
Puskulcu, Halis
Püskülcü, H
Main Affiliation: 03.04. Department of Computer Engineering
Status: Former Staff

Sustainable Development Goals

Research products per goal:

1. No Poverty: 0
2. Zero Hunger: 0
3. Good Health and Well-Being: 1
4. Quality Education: 1
5. Gender Equality: 0
6. Clean Water and Sanitation: 0
7. Affordable and Clean Energy: 0
8. Decent Work and Economic Growth: 0
9. Industry, Innovation and Infrastructure: 8
10. Reduced Inequalities: 0
11. Sustainable Cities and Communities: 0
12. Responsible Consumption and Production: 0
13. Climate Action: 0
14. Life Below Water: 0
15. Life on Land: 0
16. Peace, Justice and Strong Institutions: 0
17. Partnerships for the Goals: 0

Documents: 2
Citations: 98
h-index: 2

This researcher does not have a WoS ID.

Scholarly Output: 11
Articles: 2
Views / Downloads: 9631 / 3979
Supervised MSc Theses: 9
Supervised PhD Theses: 0
WoS Citation Count: 96
Scopus Citation Count: 98
Patents: 0
Projects: 0
WoS Citations per Publication: 8.73
Scopus Citations per Publication: 8.91
Open Access Source: 11
Supervised Theses: 9
Journal counts:
Journal of Nuclear Medicine: 2

Scholarly Output Search Results

Now showing 1 - 10 of 11
  • Master Thesis
    Categorization of Web Sites in Turkey with SVM
    (Izmir Institute of Technology, 2004) Şimşek, Kadir; Püskülcü, Halis
    In this study, "Categorization of Web Sites in Turkey with SVM," after a brief introduction to the World Wide Web and a more detailed description of the concepts of text categorization and web site categorization, web sites are categorized, including all the prerequisites for the classification task. As an information resource, the web has undeniable importance in human life. However, the huge structure of the web and its uncontrolled growth have given rise to new information retrieval research areas in recent years. Web mining, the general name for these studies, investigates activities and structures on the web to automatically discover and gather meaningful information from web documents. It consists of three subfields: web structure mining, web content mining, and web usage mining. In this project, web content mining was applied to web sites in Turkey during the categorization process. The Support Vector Machine, a supervised learning method based on statistics and the principle of structural risk minimization, is used as the machine learning technique for web site categorization. This thesis is intended to draw conclusions about the distribution of web sites with respect to text-based thematic categorization. The 12 top-level categories of the popular web directory Yahoo were used. Besides the main purpose, we gathered several descriptive statistics about web sites and the content of their HTML pages, including meta-tag usage percentages, HTML design structures, and plug-in usage. The process starts with a web downloader that retrieves page contents and other information, such as frame content, from each web site. Next, the downloaded documents are manipulated, parsed, and simplified, completing the preparations for the categorization task. Then, the Support Vector Machine (SVM) package SVMlight, developed by Thorsten Joachims, is applied to classify the web sites under the given categories. The classification results in the last section show that some overlapping categories exist and that accuracy and precision values are between 60 and 80 percent. In addition, we observed that almost 17 percent of the web sites use HTML frames and that 9367 web sites include meta-keywords.
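The pipeline the abstract describes, bag-of-words features fed to a linear SVM, can be sketched as follows. The thesis used Joachims' SVMlight package, so the tiny hinge-loss subgradient trainer and the toy documents below are illustrative stand-ins, not the thesis code:

```python
# Minimal linear-SVM text classifier trained with hinge-loss subgradient
# descent (Pegasos-style), a stand-in for SVMlight. Toy data only.
docs = [
    ("match goal team player score", +1),   # +1: sports-like site
    ("league match team win goal",  +1),
    ("election government vote law", -1),   # -1: politics-like site
    ("government policy vote court", -1),
]

# Bag-of-words vocabulary built from the training documents.
vocab = {w: i for i, w in enumerate(sorted({w for d, _ in docs for w in d.split()}))}

def vec(doc):
    x = [0.0] * len(vocab)
    for w in doc.split():
        if w in vocab:
            x[vocab[w]] += 1.0
    return x

lam, epochs = 0.01, 200
w = [0.0] * len(vocab)
t = 0
for _ in range(epochs):
    for doc, y in docs:
        t += 1
        eta = 1.0 / (lam * t)           # standard Pegasos step size
        x = vec(doc)
        margin = y * sum(wi * xi for wi, xi in zip(w, x))
        # Subgradient step on the regularized hinge loss.
        for i in range(len(w)):
            w[i] *= (1 - eta * lam)
            if margin < 1:
                w[i] += eta * y * x[i]

def predict(doc):
    return 1 if sum(wi * xi for wi, xi in zip(w, vec(doc))) >= 0 else -1

print(predict("goal match team"))        # 1  (sports-like page)
print(predict("vote government law"))    # -1 (politics-like page)
```

With real web-site data, the feature vectors would come from the downloaded and parsed page text rather than hand-written strings, but the training step is the same.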
  • Master Thesis
    Improvements in K-Means Algorithm To Execute on Large Amounts of Data
    (Izmir Institute of Technology, 2004) Sülün, Erhan; Püskülcü, Halis
    Thanks to the large storage capacities of current computer systems, companies' datasets have expanded dramatically in recent years. The rapid growth of company databases has raised the need for faster data mining algorithms, as time is critical for these companies. Large datasets hold historical data about company transactions, which contain valuable hidden patterns that can provide a competitive advantage. Since time is so important, companies need to mine these huge databases and make accurate decisions in short durations in order to gain a marketing advantage. Therefore, classical data mining algorithms need to be revised so that they discover hidden patterns and relationships in databases in shorter durations. In this project, the performance of the K-means data mining algorithm is improved so that it can cluster large datasets in shorter time. The algorithm is improved through parallelization, which is a suitable solution because the popular way of increasing computation power is to connect computers and execute algorithms simultaneously on a network of computers; this popularity also increases the availability of parallel computation clusters day by day. A parallel version of the K-means algorithm has been designed and implemented in the C language, using the MPI (Message Passing Interface) library for the parallelization. The serial algorithm has also been implemented in C for comparison. The algorithms were then run several times under the same conditions and the results discussed. Summarized results of these executions, presented in tables and graphics, show that parallelizing the K-means algorithm provides a performance gain almost proportional to the number of computers used for parallel execution.
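The decomposition the thesis implements in C with MPI can be sketched compactly: each rank assigns its partition of the points to the nearest centroid and produces local sums and counts, and a global reduction merges them before recomputing the centroids. The sketch below is a single-process simulation, with an ordinary loop over partitions standing in for MPI_Allreduce; the toy points are illustrative:

```python
# Data-parallel k-means: per-partition assignment plus a merged reduction,
# mirroring the MPI decomposition described in the abstract.
def assign_and_sum(points, centroids):
    k, dim = len(centroids), len(points[0])
    sums = [[0.0] * dim for _ in range(k)]
    counts = [0] * k
    for p in points:
        # Nearest centroid by squared Euclidean distance.
        j = min(range(k), key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
        counts[j] += 1
        for d in range(dim):
            sums[j][d] += p[d]
    return sums, counts

def kmeans_parallel(partitions, centroids, iters=10):
    k, dim = len(centroids), len(centroids[0])
    for _ in range(iters):
        total_sums = [[0.0] * dim for _ in range(k)]
        total_counts = [0] * k
        # Each partition is the data one MPI rank would hold; the merge
        # below is what MPI_Allreduce would compute across ranks.
        for part in partitions:
            sums, counts = assign_and_sum(part, centroids)
            for j in range(k):
                total_counts[j] += counts[j]
                for d in range(dim):
                    total_sums[j][d] += sums[j][d]
        centroids = [
            [s / total_counts[j] for s in total_sums[j]] if total_counts[j] else centroids[j]
            for j in range(k)
        ]
    return centroids

parts = [[(0.0, 0.0), (0.1, 0.2)], [(5.0, 5.0), (5.2, 4.9)]]  # two "ranks"
print(kmeans_parallel(parts, [(0.0, 0.1), (4.0, 4.0)]))
```

Because each iteration exchanges only k centroid sums and counts rather than the points themselves, the communication cost is independent of the dataset size, which is why the speedup scales almost linearly with the number of machines.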
  • Article
    Citation - WoS: 72
    Citation - Scopus: 75
    The Effects of Estrogen, Progesterone, and C-erbB-2 Receptor States on 18F-FDG Uptake of Primary Breast Cancer Lesions
    (Society of Nuclear Medicine and Molecular Imaging, 2007) Mavi, Ayşe; Çermik, Tevfik F.; Urhan, Muammer; Püskülcü, Halis; Basu, Sandip; Yu, Jian Q.; Zhuang, Hongming; Czerniecki, Brian; Alavi, Abass
    The purpose of this prospective study was to investigate whether correlations exist between the 18F-FDG uptake of primary breast cancer lesions and predictive and prognostic factors such as estrogen receptor (ER), progesterone receptor (PR), and C-erbB-2 receptor (C-erbB-2R) states. Methods: Before undergoing partial or total mastectomy, 213 patients with newly diagnosed breast cancer underwent 18F-FDG PET (5.2 MBq/kg of body weight). The maximum standardized uptake value (SUV) of the primary lesion was measured in each patient. Standard immunohistochemistry was performed on a surgical specimen of the cancer lesion to characterize the receptor state of the tumor cells. Pearson χ2 tests were performed on the cross-tables of different receptor states to test any association that may exist among ER, PR, and C-erbB-2R. Maximum SUV measurements for different receptor states were compared using factorial ANOVA in a completely random design. Results: After exclusion of certain lesions, 118 lesions were analyzed for this study. The mean maximum SUVs of ER-positive and ER-negative lesions were 3.03 ± 0.26 and 5.64 ± 0.75, those of PR-positive and PR-negative lesions were 3.24 ± 0.29 and 4.89 ± 0.67, and those of C-erbB-2R-positive and C-erbB-2R-negative lesions were 4.64 ± 0.70 and 3.70 ± 0.35, respectively. χ2 tests for ER and PR showed that if one is positive then the other tends to be positive as well (χ2 = 71.054, P < 0.01). For ER and C-erbB-2R states, if ER is positive, C-erbB-2R will more likely be negative (χ2 = 13.026, P < 0.01). No relationship was detected between PR and C-erbB-2R states (χ2 = 3.695, P > 0.05). ANOVAs showed that PR state alone (F = 0.095, P > 0.05) and C-erbB-2R state alone (F = 0.097, P > 0.05) had no effect on 18F-FDG uptake, but ER state alone had an effect (F = 9.126, P < 0.01). The combination of ER and PR had no additional effect on 18F-FDG uptake. Our study also demonstrated that interactions exist between ER and C-erbB-2R state and between PR and C-erbB-2R state. Conclusion: SUV measurements may provide valuable information about the state of ER, PR, and C-erbB-2R and the associated glucose metabolism as measured by the 18F-FDG uptake of primary breast cancer lesions. Such an association may be of importance to treatment planning and outcome in these patients.
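The receptor-association tests in the Methods section are Pearson χ2 tests on 2×2 cross-tables of receptor states. A minimal sketch of that statistic, using hypothetical counts rather than the study's data:

```python
# Pearson chi-square statistic for an r x c cross-table, the test the
# abstract applies to pairs of receptor states. Counts are hypothetical.
def chi_square(table):
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    chi2 = 0.0
    for i, r in enumerate(table):
        for j, obs in enumerate(r):
            exp = rows[i] * cols[j] / n   # expected count under independence
            chi2 += (obs - exp) ** 2 / exp
    return chi2

# Hypothetical ER+/- (rows) vs PR+/- (columns) counts for 118 lesions.
table = [[60, 10],
         [8, 40]]
print(round(chi_square(table), 3))
```

A statistic above the critical value (3.84 for one degree of freedom at P < 0.05) rejects independence, which is how the reported associations such as χ2 = 71.054 for ER and PR are read.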
  • Article
    Citation - WoS: 24
    Citation - Scopus: 23
    The Effect of Age, Menopausal State, and Breast Density on 18F-FDG Uptake in Normal Glandular Breast Tissue
    (Society of Nuclear Medicine and Molecular Imaging, 2010) Mavi, Ayşe; Çermik, Tevfik F.; Urhan, Muammer; Püskülcü, Halis; Basu, Sandip; Cucchiara, Andrew J.; Yu, Jian Q.; Alavi, Abass
    Theoretically, the degree of 18F-FDG uptake in the glandular tissues of the normal breast can affect the detection of breast cancer. The aim of this prospective study was to investigate relationships among age, menopausal state, and breast density and determine whether they affect 18F-FDG uptake in normal glandular breast tissue. Methods: Among 250 newly diagnosed breast cancer patients, 149 patients (mean age ± SD, 50.9 ± 9.70 y; range, 32-77 y) were analyzed because they had normal contralateral breasts confirmed by MRI, mammography, and 18F-FDG PET examinations. PET images were acquired 60 ± 2 min after the administration of 18F-FDG (5.2 MBq/kg of body weight). The maximum and average standardized uptake values (SUVmax and SUVavg, respectively) of 18F-FDG were calculated in the normal breast. Patients were divided into groups according to qualitative breast density and menopausal state. Descriptive statistics and 2-factorial analysis of covariance were used to assess the effects of qualitative breast density, menopausal state, and age on SUVmax and SUVavg. Pearson χ2 was used to test the relationship between menopausal state and qualitative breast density. Results: The average age of patients with nondense breasts was significantly higher than that of patients with dense breasts (P < 0.01). Also, breast density was related to menopausal state (P < 0.05). Dense breasts had a mean SUVmax of 1.243 and mean SUVavg of 0.694, whereas nondense breasts had a mean SUVmax of 0.997 and mean SUVavg of 0.592. Analysis of covariance indicated that density and the linear effect of age were significant with regard to both SUVmax and SUVavg. After removing the linear effect of age, menopausal state had no effect on SUVmax and SUVavg. Conclusion: 18F-FDG uptake significantly decreases as age increases and breast density decreases. Age and qualitative breast density are independent factors and significantly affect 18F-FDG uptake for both SUVmax and SUVavg. Menopausal state had no effect on SUVmax and SUVavg. Copyright © 2010 by the Society of Nuclear Medicine, Inc.
  • Master Thesis
    Mining XML Documents with Association Rule Algorithms
    (Izmir Institute of Technology, 2008) Gürel, Görkem; Püskülcü, Halis
    Following the increasing use of XML technology for data storage and data exchange between applications, mining XML documents has become an important research topic. In this study, we consider the problem of mining association rules between items in XML documents. The principal purpose of this study is to apply association rule algorithms directly to XML documents using XQuery, a functional expression language that can be used to query or process XML data. We used three algorithms: Apriori, AprioriTid, and High Efficient AprioriTid. We compare the mining times of these three Apriori-like algorithms on XML documents using different support levels, different datasets, and different dataset sizes.
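The core idea, extracting transactions from XML and running an Apriori-style levelwise search over them, can be sketched briefly. The thesis drives the algorithms with XQuery; the sketch below substitutes Python's ElementTree for the XML access and omits Apriori's subset-pruning refinement, and the transaction document is hypothetical:

```python
import xml.etree.ElementTree as ET

# Hypothetical transaction document of the shape the thesis mines.
xml = """
<transactions>
  <transaction><item>bread</item><item>milk</item></transaction>
  <transaction><item>bread</item><item>milk</item><item>eggs</item></transaction>
  <transaction><item>milk</item><item>eggs</item></transaction>
  <transaction><item>bread</item><item>eggs</item></transaction>
</transactions>
"""
baskets = [frozenset(i.text for i in t) for t in ET.fromstring(xml)]

def apriori(baskets, minsup):
    # Level 1: frequent single items.
    items = {i for b in baskets for i in b}
    freq = {frozenset([i]) for i in items
            if sum(i in b for b in baskets) >= minsup}
    result, k = set(freq), 2
    while freq:
        # Candidate k-itemsets from unions of frequent (k-1)-itemsets,
        # then a counting pass over the baskets.
        cands = {a | b for a in freq for b in freq if len(a | b) == k}
        freq = {c for c in cands
                if sum(c <= b for b in baskets) >= minsup}
        result |= freq
        k += 1
    return result

for s in sorted(apriori(baskets, 2), key=sorted):
    print(sorted(s))
```

AprioriTid and its variants differ mainly in how the counting pass is organized (keeping per-transaction candidate lists instead of rescanning the raw data), which is what the thesis times against plain Apriori.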
  • Master Thesis
    Image Classification by Means of Pattern Recognition Techniques
    (Izmir Institute of Technology, 1997) Güzel, Cumhur; Püskülcü, Halis
    Image classification plays an important role in many computer vision tasks, such as surface inspection and shape determination. In this study, various 2-D image classification techniques are investigated and assessed, and a computational method to classify 2-D X-ray images is developed and evaluated. The pattern recognition techniques devised for image classification may be divided into two main groups: mathematical and statistical model-based techniques, and artificial neural network-based techniques. We have concentrated on artificial neural network techniques. In the experiments, both kinds of techniques were applied to the classification of VUR (vesico-ureteral reflux) images. According to the experiments performed on the VUR case study, the neural network technique was the more successful classifier. Rather than a pure artificial neural network solution, a hybrid method is proposed: images are represented via a transformation-invariant mathematical structure called Fourier descriptors, and these structures are used as input to train the neural network for the classification part. The application proceeds as follows: feature extraction is performed first, then the extracted features are used as pattern vectors for training the neural network. The shapes in the X-ray images are represented using Fourier descriptors, which provide a transformation-invariant (translation, rotation, and scaling invariant) representation. This vector representation is fed to the neural network, with backpropagation used as the training algorithm. After training is finished, the system is ready for querying. The minimum-mean-distance and nearest-neighbor rules were also applied to the pattern vectors generated for the experiments, but the multilayer perceptron trained by backpropagation outperforms both of these statistical classifiers.
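The transformation-invariant representation described above can be sketched directly: treat the sampled boundary as a complex signal, discard the DC coefficient (translation), normalize by the magnitude of the first coefficient (scale), and keep only coefficient magnitudes (rotation and starting point). The L-shaped contour below is an illustrative toy shape, not a VUR image:

```python
import cmath
import math

def fourier_descriptors(contour, n_desc=5):
    """Translation-, scale-, and rotation-invariant descriptors of a closed contour."""
    pts = [complex(x, y) for x, y in contour]
    N = len(pts)
    # DFT of the complex boundary signal.
    F = [sum(p * cmath.exp(-2j * math.pi * k * n / N) for n, p in enumerate(pts))
         for k in range(N)]
    # F[0] carries translation; |F[1]| carries scale; phases carry rotation
    # and starting point, so normalized magnitudes are invariant.
    return [abs(F[k]) / abs(F[1]) for k in range(2, 2 + n_desc)]

def transform(contour, s=1.0, theta=0.0, tx=0.0, ty=0.0):
    # Scale by s, rotate by theta, then translate by (tx, ty).
    c, si = math.cos(theta), math.sin(theta)
    return [(s * (x * c - y * si) + tx, s * (x * si + y * c) + ty) for x, y in contour]

shape = [(1, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2), (0, 1), (0, 0)]  # L-shaped outline
a = fourier_descriptors(shape)
b = fourier_descriptors(transform(shape, s=2.5, theta=0.7, tx=3.0, ty=-2.0))
print(all(abs(x - y) < 1e-6 for x, y in zip(a, b)))  # True: descriptors are invariant
```

In the hybrid method, a fixed-length vector like `a` is exactly the kind of pattern vector fed to the multilayer perceptron, so the network never has to learn invariance to position, size, or orientation itself.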
  • Master Thesis
    Recent developments in wireless network systems
    (Izmir Institute of Technology, 2001) Nurel, Mehmet Ali; Püskülcü, Halis
    Over the last decades, tremendous growth in wireless communication systems has been seen. The great success of wireless telephony and messaging systems is now beginning to carry over to the world of personal and business computing systems. As a result, instead of facing the problems and restrictions of wired networks, people can access and share information nearly anywhere they want on earth. This study aims to present the various aspects of wireless systems. To do so, the history of wireless network systems is introduced, some definitions are extracted, answers to basic questions on why and where wireless networks can be applied are given, and the protocols that have been developed are presented. Moreover, the benefits of wireless networks, related configurations and standards, several underlying technologies, and some markets related to wireless systems are described. Comparisons between wireless and wired networks, and among the wireless technologies themselves, are also given.
  • Master Thesis
    Comparison of Different Algorithms for Exploiting the Hidden Trends in Data Sources
    (Izmir Institute of Technology, 2003) Özsevim, Emrah; Püskülcü, Halis
    The growth of large-scale transactional databases, time-series databases, and other kinds of databases has given rise to the development of several efficient algorithms that cope with the computationally expensive task of association rule mining. In this study, three algorithms for exploiting hidden trends, Apriori (frequent itemsets), FP-tree (frequent patterns), and CHARM (closed frequent itemsets), are discussed and their performances evaluated. The performances of the algorithms were measured at different support levels, and the algorithms were tested on different data sets (both synthetic and real). The algorithms were compared according to their data preparation performance, mining performance, run-time performance, and knowledge extraction capabilities. The Apriori algorithm is the most prevalent association rule mining algorithm; it makes multiple passes over the database, finding the set of frequent itemsets at each level. The FP-tree algorithm is a scalable algorithm which finds the crucial information regarding the complete set of prefix paths, conditional pattern bases, and frequent patterns using a compact FP-tree-based mining method. CHARM is a novel algorithm that brings remarkable improvements over existing association rule mining algorithms by exploiting the fact that mining the set of closed frequent itemsets is adequate instead of mining the set of all frequent itemsets. Based on our experimental results, we conclude that the Apriori algorithm performs well on sparse data sets. The FP-tree algorithm extracts fewer associations than Apriori; however, it is a completely feasible solution that facilitates mining dense data sets at low support levels. The CHARM algorithm, on the other hand, is appropriate for mining closed frequent itemsets (a substantial portion of the frequent itemsets) on both sparse and dense data sets, even at low support levels.
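The distinction the abstract draws, closed frequent itemsets versus all frequent itemsets, can be illustrated on a toy basket set. The brute-force enumeration below only demonstrates the definition CHARM exploits (an itemset is closed when no proper superset has the same support), not CHARM's actual search:

```python
from itertools import combinations

# Toy baskets; CHARM's point is that the closed itemsets summarize all
# frequent itemsets without losing any support information.
baskets = [frozenset(b) for b in
           [("a", "b", "c"), ("a", "b"), ("a", "b", "c"), ("b", "c")]]

def support(itemset):
    return sum(itemset <= b for b in baskets)

# Enumerate all frequent itemsets by brute force (fine at this toy size;
# CHARM itself avoids this enumeration entirely).
items = sorted({i for b in baskets for i in b})
frequent = {frozenset(c): support(frozenset(c))
            for k in range(1, len(items) + 1)
            for c in combinations(items, k)
            if support(frozenset(c)) >= 2}

# Closed = no proper superset with the same support.
closed = {s: n for s, n in frequent.items()
          if not any(s < t and m == n for t, m in frequent.items())}
for s, n in sorted(closed.items(), key=lambda kv: sorted(kv[0])):
    print(sorted(s), n)
```

Here 7 frequent itemsets collapse to 4 closed ones, and the support of any frequent itemset can be recovered as the support of its smallest closed superset, which is why mining only the closed sets is adequate.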
  • Master Thesis
    Heuristic Container Placement Algorithms
    (Izmir Institute of Technology, 2003) Aslan, Burak Galip; Püskülcü, Halis
    With the growth of transportation over sea, defining transportation processes better and finding ways to make them more effective have become one of the most important research areas of today. Especially in the last quarter of the previous decade, computers became much more powerful tools, with impressive data processing capabilities, and began taking serious roles in system development studies. As a result, constructing models of the processes in container terminals and processing the data with computers creates opportunities for automating various terminal processes. The final step of these studies is the full automation of terminal activities with software packages that combine various functions, focused on various processes, in a single system. This study concerns a project carried out for a container terminal owned by a private company. During the study, the subject was discussed with experts, and the container handling processes in the terminal were analyzed in order to define the main structure of the yard management software to be created. The study focuses on container handling activities over the yard space, so as to create a basis for a computer system that will take part in decisions during container operations. Object-oriented analysis and design methods are used to define the system that will support decisions in yard operations. The optimization methodology at the core of the container placement decisions is based on using different placement patterns and placement algorithms for different conditions. These placement patterns and algorithms were constructed according to the container handling machinery in use at the terminal for which the study was made.
  • Master Thesis
    Finding and Evaluating Patterns in Web Repository Using Database Technology and Data Mining Algorithms
    (Izmir Institute of Technology, 2002) Özakar, Belgin; Püskülcü, Halis
    Web mining is a very active research topic that combines two active research areas: data mining and the World Wide Web. Web mining research relates to several research communities, such as databases, statistics, artificial intelligence, and visualization. Although some confusion exists about what web mining covers, the most recognized approach is to categorize it into three areas: web content mining, web structure mining, and web usage mining. Web content mining focuses on the discovery and retrieval of useful information from web contents, data, and documents, while web structure mining emphasizes the discovery of how to model the underlying link structures of the web; sometimes the distinction between these two categories is not very clear. Web usage mining is a relatively independent, but not isolated, category, in which the following studies continue: general web usage mining, site modification, systems improvement, and personalization. General web usage mining systems aim to discover general trends and patterns from log files by adapting data mining techniques. The objective of site modification systems is to improve the design of a web site by suggesting modifications to its content and structure. Research on system improvement focuses on using web usage mining to improve web traffic. Finally, personalization systems aim to understand individual trends in order to personalize web sites. The subject of this thesis, the IYTE Web Usage Mining (WUM) System, is an example of system development in the field of general web usage mining with a database approach, in which the flexible query capability of SQL (Structured Query Language) was explored. Data mining and database techniques were applied to the access, error, and user logs of the web server of Izmir Institute of Technology. The main objective was to create a site improvement tool for the web administrator by reporting the distribution of the hits received by the web server according to timestamp, users, service, and URL types, while at the same time revealing the nature of the errors generated by the web server. All data cleaning and transaction identification processes were handled by software routines coded in Java. Clean transactions were imported into the IYTE Web Usage Mining (IYTE WUM) relational database. Flexible features of SQL were used to apply the Apriori algorithm to discover the most frequently visited pairs of URLs, in addition to extracting general knowledge from the data.
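The database approach the thesis takes, expressing Apriori's counting step as SQL over cleaned transactions, can be sketched with an in-memory SQLite database. The table layout and the click data below are hypothetical, not the IYTE WUM schema; a self-join plays the role of Apriori's pair-counting pass:

```python
import sqlite3

# Hypothetical cleaned click transactions (session_id, url), the shape a
# web-usage-mining database holds after log cleaning and sessionization.
rows = [
    (1, "/index.html"), (1, "/courses.html"), (1, "/contact.html"),
    (2, "/index.html"), (2, "/courses.html"),
    (3, "/index.html"), (3, "/news.html"),
    (4, "/courses.html"), (4, "/contact.html"),
]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE hits (session INTEGER, url TEXT)")
db.executemany("INSERT INTO hits VALUES (?, ?)", rows)

# Apriori's pair-counting step as a SQL self-join: URL pairs that
# co-occur in at least 2 sessions (support threshold 2).
pairs = db.execute("""
    SELECT a.url, b.url, COUNT(*) AS sup
    FROM hits a JOIN hits b
      ON a.session = b.session AND a.url < b.url
    GROUP BY a.url, b.url
    HAVING COUNT(*) >= 2
""").fetchall()
for u, v, sup in pairs:
    print(u, v, sup)
```

The `a.url < b.url` condition generates each candidate pair once, and the `GROUP BY`/`HAVING` clause is the support count and threshold, which is exactly the kind of flexibility the thesis attributes to SQL.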