Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection

Permanent URI for this collection: https://hdl.handle.net/11147/7148

Search Results

Now showing 1 - 10 of 297
  • Article
    Citation - Scopus: 27
    Apoptotic Effects of Resveratrol, a Grape Polyphenol, on Imatinib-Sensitive and -Resistant K562 Chronic Myeloid Leukemia Cells
    (2012) Can, G.; Cakir, Z.; Kartal, M.; Gunduz, U.; Baran, Y.
    Aim: To examine the antiproliferative and apoptotic effects of resveratrol on imatinib-sensitive and imatinib-resistant K562 chronic myeloid leukemia cells. Materials and Methods: Antiproliferative effects of resveratrol were determined by the 2,3-bis[2-methoxy-4-nitro-5-sulphophenyl]-2H-tetrazolium-5-carboxanilide inner salt (XTT) cell proliferation assay. Apoptotic effects of resveratrol on sensitive K562 and resistant K562/IMA-3 cells were determined through changes in caspase-3 activity, loss of mitochondrial membrane potential (MMP), and annexin V-FITC staining. Results: The concentrations of resveratrol that inhibited cell growth by 50% (IC50) were calculated as 85 and 122 μM for K562 and K562/IMA-3 cells, respectively. There were 1.91-, 7.42- and 14.73-fold increases in loss of MMP in K562 cells treated with 10, 50, and 100 μM resveratrol, respectively. The same concentrations of resveratrol resulted in 2.21-, 3.30- and 7.65-fold increases in loss of MMP in K562/IMA-3 cells. Caspase-3 activity increased 1.04-, 2.77- and 4.8-fold in K562 and 1.02-, 1.41- and 3.46-fold in K562/IMA-3 cells in response to the same concentrations of resveratrol, respectively. Apoptosis was induced in 58.7% and 43.3% of K562 and K562/IMA-3 cells, respectively, in response to 100 μM resveratrol. Conclusion: Taken together, these results suggest potential use of resveratrol in CML, including in patients with primary and/or acquired resistance to imatinib.
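The IC50 values above come from a dose-response readout such as the XTT assay. As a minimal illustration (hypothetical viability numbers, not the paper's data), an IC50 can be estimated by linear interpolation between the two doses that bracket 50% viability:

```python
# Illustrative sketch only: estimating an IC50 from a dose-response
# series by interpolating between the doses that bracket 50% viability.
def ic50(doses, viabilities):
    """doses ascending (e.g. in uM); viabilities as fractions of control."""
    for (d0, v0), (d1, v1) in zip(zip(doses, viabilities),
                                  zip(doses[1:], viabilities[1:])):
        if v0 >= 0.5 >= v1:  # the 50% crossing lies between d0 and d1
            return d0 + (v0 - 0.5) * (d1 - d0) / (v0 - v1)
    return None  # 50% inhibition never reached in the tested range

# hypothetical XTT viability readings for a titration
doses = [10, 50, 100, 150]
viab = [0.95, 0.70, 0.45, 0.30]
print(round(ic50(doses, viab), 1))  # interpolated IC50 in the same units
```

In practice a sigmoidal (e.g. four-parameter logistic) fit is preferred over linear interpolation; the sketch only shows the idea of reading the 50% crossing off the curve.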
  • Article
    Citation - Scopus: 1
    Performance Indices of Soft Computing Models To Predict the Heat Load of Buildings in Terms of Architectural Indicators
    (Yildiz Technical University, 2016) Turhan, C.; Kazanasmaz, T.; Akkurt, G.G.
    This study estimates the heat load of buildings in Izmir/Turkey by three soft computing (SC) methods, Artificial Neural Networks (ANNs), Fuzzy Logic (FL) and the Adaptive Neuro-based Fuzzy Inference System (ANFIS), and compares their prediction indices. Knowing what the heat load of a building will be at the architectural design stage is necessary to forecast building performance and take precautions against possible failures, and accurate soft computing techniques would make this process practical. For this purpose, four inputs were employed in each model: wall overall heat transfer coefficient, building area/volume ratio, total external surface area, and total window area/total external surface area ratio. The predicted heat load is evaluated comparatively against simulation outputs. The ANN model estimated the heat load of the case apartments with an accuracy of 97.7% and a MAPE of 5.06%; these ratios are 98.6% and 3.56% for the Mamdani fuzzy inference system (FL) and 99.0% and 2.43% for ANFIS. Comparing these values, the ANFIS model proved the best learning technique of the three and is applicable in building energy performance studies. © 2016. All Rights Reserved.
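The MAPE figures quoted above (5.06%, 3.56%, 2.43%) are mean absolute percentage errors between measured and predicted heat loads. A minimal sketch of the metric, with hypothetical numbers rather than the study's data:

```python
# Mean absolute percentage error (MAPE): the error metric used to
# compare the ANN, FL and ANFIS predictions above. Hypothetical values.
def mape(actual, predicted):
    """MAPE in percent; actual values must be nonzero."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

measured = [120.0, 80.0, 100.0]   # e.g. simulated heat loads, hypothetical
predicted = [114.0, 84.0, 100.0]  # e.g. model outputs, hypothetical
print(round(mape(measured, predicted), 2))  # lower MAPE = better fit
```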
  • Conference Object
    Citation - WoS: 5
    Citation - Scopus: 7
    Modeling Leakage of Ephemeral Secrets in Tripartite/Group Key Exchange
    (Institute of Electronics, Information and Communication Engineers (IEICE), 2013) Manulis, Mark; Suzuki, Koutarou; Ustaoglu, Berkant
    We propose a security model, referred to as the g-eCK model, for group key exchange that captures essentially all non-trivial leakage of static and ephemeral secret keys of participants, i.e., a group key exchange version of the extended Canetti-Krawczyk (eCK) model. Moreover, we propose the first one-round tripartite key exchange (3KE) protocol secure in the g-eCK model under the gap Bilinear Diffie-Hellman (gap BDH) assumption and in the random oracle model.
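One-round tripartite key exchange under a bilinear assumption builds on pairing-based constructions in the style of Joux; the following is only an unauthenticated sketch of that underlying idea (the paper's g-eCK-secure protocol additionally involves static keys), assuming a symmetric bilinear pairing e: G × G → G_T with generator g:

```latex
% Unauthenticated one-round tripartite key exchange (Joux-style sketch).
% Parties A, B, C pick ephemeral secrets a, b, c and broadcast one message each:
A \to \{B, C\}: g^{a} \qquad B \to \{A, C\}: g^{b} \qquad C \to \{A, B\}: g^{c}
% Each party then derives the shared key with a single pairing evaluation:
K = e(g^{b}, g^{c})^{a} = e(g^{a}, g^{c})^{b} = e(g^{a}, g^{b})^{c} = e(g, g)^{abc}
```

An eavesdropper who sees only g^a, g^b, g^c must compute e(g, g)^{abc}, which is exactly the Bilinear Diffie-Hellman problem.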
  • Article
    Citation - WoS: 1
    Artificial Neural Networks To Predict Design Properties for Cemented Embankment Layers of High-Speed Train Railways
    (Foundation Cement, Lime, Concrete, 2013) Egeli, İsfendiyar; Tayfur, Gökmen; Yılmaz, E.; Uşun, Handan
    (Cement-Wapno-Beton, Vol. XVIII/LXXX, 2013, No. 1, p. 10.) Designing a high-speed train railway (HSTR) embankment is a complicated process, as it involves high geometric design standards and demanding material properties. In this study, the replaceability of the fill stratum by a prepared subgrade layer without and with cement addition is investigated. In the experiments, specimens composed of natural sand with different cement additions and two w/c ratios were used. The Plaxis-FEM (2D) program was employed to find the maximum expected total settlements of HSTR embankments with a cemented subgrade layer. Furthermore, an artificial neural network model was constructed to predict the failure stress, elasticity modulus and strains. The sensitivity analysis revealed that cement content was the most sensitive parameter for the stress and elasticity modulus predictions, while the curing age of the specimens was most sensitive for the strain forecast.
  • Article
    Citation - WoS: 4
    Citation - Scopus: 5
    DMA: Matrix-Based Dynamic Itemset Mining Algorithm
    (IGI Global Publishing, 2013) Oğuz, Damla; Yıldız, Barış; Ergenç, Belgin
    Updates on an operational database bring forth the challenge of keeping the frequent itemsets up to date without re-running the itemset mining algorithms. Studies on dynamic itemset mining, the solution to such an update problem, have to address several challenges: handling (i) updates without re-running the base algorithm, (ii) changes in the support threshold, (iii) new items, and (iv) additions/deletions in updates. The study in this paper extends the Incremental Matrix Apriori Algorithm, which addresses the first three challenges while inheriting the advantages of the base algorithm, which works without candidate generation. In their current work, the authors improve the former algorithm to handle updates composed of additions and deletions, and carry out a detailed performance evaluation on one real and two benchmark datasets.
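The core idea of dynamic itemset mining — adjusting support counts for added and deleted transactions instead of re-mining the whole database — can be sketched as follows. This is an illustration of the general principle only, not the authors' matrix-based algorithm, which additionally discovers itemsets that become frequent only after the update:

```python
# Illustrative sketch: incrementally updating support counts of already
# known itemsets for a batch of added and deleted transactions.
def update_supports(supports, added, deleted):
    """supports: {frozenset itemset: count}; added/deleted: transaction lists."""
    for itemset in supports:
        supports[itemset] += sum(itemset <= set(t) for t in added)
        supports[itemset] -= sum(itemset <= set(t) for t in deleted)
    return supports

counts = {frozenset({'a', 'b'}): 4, frozenset({'a'}): 7}
update_supports(counts, added=[['a', 'b', 'c']], deleted=[['a']])
print(counts[frozenset({'a', 'b'})])  # 4 + 1 - 0 = 5
```

Note that a full dynamic miner must also handle itemsets not yet in `supports` (e.g. ones involving the new item 'c'), which is precisely where matrix-based bookkeeping without candidate generation pays off.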
  • Article
    Citation - WoS: 31
    Citation - Scopus: 65
    The Performance of the CMS Muon Detector in Proton-Proton Collisions at √s = 7 TeV at the LHC
    (IOP Publishing Ltd., 2013) Karapınar, Güler
    The performance of all subsystems of the CMS muon detector has been studied using a sample of proton-proton collision data at √s = 7 TeV collected at the LHC in 2010, corresponding to an integrated luminosity of approximately 40 pb⁻¹. The measured distributions of the major operational parameters of the drift tube (DT), cathode strip chamber (CSC), and resistive plate chamber (RPC) systems met the design specifications. The spatial resolution per chamber was 80-120 μm in the DTs, 40-150 μm in the CSCs, and 0.8-1.2 cm in the RPCs. The time resolution achievable was 3 ns or better per chamber for all three systems. The efficiency for reconstructing hits and track segments originating from muons traversing the muon chambers was in the range 95-98%. The CSC and DT systems provided muon track segments for the CMS trigger with over 96% efficiency, and identified the correct triggering bunch crossing in over 99.5% of such events. The measured performance is well reproduced by Monte Carlo simulation of the muon system down to the level of individual channel response. The results confirm the high efficiency of the muon system, the robustness of the design against hardware failures, and its effectiveness in the discrimination of backgrounds.
  • Article
    Citation - WoS: 5
    Citation - Scopus: 5
    Comparison of Advanced Daylighting Systems To Improve Illuminance and Uniformity Through Simulation Modelling
    (Znack Publishing House, 2014) Kazanasmaz, Zehra Tuğçe; Fırat Örs, Pelin
    Deficiencies in the daylighting performance (illuminance and uniformity) of educational facilities may cause health problems, loss of work performance and excessive energy consumption. The daily and yearly variation of daylight is a strong challenge in this regard, and advanced daylighting systems have been developed to overcome it. Improving the daylighting performance of existing buildings is another difficulty in daylighting design, so daylighting needs should be carefully considered at the initial design stages of buildings. The aim of this study was therefore to improve the illuminance and uniformity in four selected architectural design studios in Izmir. Measurements of daylight illuminance were conducted in May and June 2012, and simulation models were built in Ecotect/Radiance. To reach the best daylighting performance, simulations were carried out with Desktop Radiance, applying laser-cut panels, prismatic panels and light shelves. The results suggest that retrofitting efforts after construction would be inadequate for daylighting unless the standards are complied with during the design process.
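The two quantities this study optimizes are straightforward to compute from a grid of illuminance measurements: the average illuminance and the uniformity ratio, commonly defined as minimum over average illuminance. A minimal sketch with hypothetical lux values (not the study's measurements):

```python
# Illuminance uniformity U0 = E_min / E_avg over a grid of measurement
# points: the closer to 1, the more evenly daylit the room.
def uniformity(illuminances):
    e_avg = sum(illuminances) / len(illuminances)
    return min(illuminances) / e_avg

grid = [150.0, 300.0, 450.0, 300.0]  # hypothetical lux at four sensor points
print(round(uniformity(grid), 2))    # min 150 lx over avg 300 lx
```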
  • Article
    Citation - WoS: 31
    Citation - Scopus: 32
    Alignment of the CMS Tracker With LHC and Cosmic Ray Data
    (IOP Publishing Ltd., 2014) Karapınar, Güler; Demir, Durmuş Ali
    The central component of the CMS detector is the largest silicon tracker ever built. The precise alignment of this complex device is a formidable challenge, and only achievable with a significant extension of the technologies routinely used for tracking detectors in the past. This article describes the full-scale alignment procedure as it is used during LHC operations. Among the specific features of the method are the simultaneous determination of up to 200 000 alignment parameters with tracks, the measurement of individual sensor curvature parameters, the control of systematic misalignment effects, and the implementation of the whole procedure in a multiprocessor environment for high execution speed. Overall, the achieved statistical accuracy on the module alignment is found to be significantly better than 10 μm.
  • Article
    Citation - WoS: 456
    Citation - Scopus: 423
    Description and Performance of Track and Primary-Vertex Reconstruction With the CMS Tracker
    (IOP Publishing Ltd., 2014) Demir, Durmuş Ali; CMS Collaboration
    A description is provided of the software algorithms developed for the CMS tracker, both for reconstructing charged-particle trajectories in proton-proton interactions and for using the resulting tracks to estimate the positions of the LHC luminous region and individual primary-interaction vertices. Despite the very hostile environment at the LHC, the performance obtained with these algorithms is found to be excellent. For tt̄ events under typical 2011 pileup conditions, the average track-reconstruction efficiency for promptly produced charged particles with transverse momenta of pT > 0.9 GeV is 94% for pseudorapidities of |η| < 0.9 and 85% for 0.9 < |η| < 2.5. The inefficiency is caused mainly by hadrons that undergo nuclear interactions in the tracker material. For isolated muons, the corresponding efficiencies are essentially 100%. For isolated muons of pT = 100 GeV emitted at |η| < 1.4, the resolutions are approximately 2.8% in pT and, respectively, 10 μm and 30 μm in the transverse and longitudinal impact parameters. The position resolution achieved for reconstructed primary vertices that correspond to interesting pp collisions is 10-12 μm in each of the three spatial dimensions. The tracking and vertexing software is fast and flexible, and easily adaptable to other functions, such as fast tracking for the trigger, or dedicated tracking for electrons that takes into account bremsstrahlung.
  • Article
    Citation - WoS: 3
    Citation - Scopus: 3
    Automated Labeling of Cancer Textures in Larynx Histopathology Slides Using Quasi-Supervised Learning
    (Science Printers and Publishers Inc., 2014) Önder, Devrim; Sarıoğlu, Sülen; Karaçalı, Bilge
    OBJECTIVE: To evaluate the performance of a quasi-supervised statistical learning algorithm, operating on datasets containing normal and neoplastic tissues, in identifying larynx squamous cell carcinomas. In addition, cancer-versus-normal texture separability measures were developed and compared for colorectal and larynx tissues. STUDY DESIGN: Light microscopic digital images of histopathological sections were obtained from laryngectomy materials including squamous cell carcinoma and nonneoplastic regions. Texture features were calculated using co-occurrence matrices and local histograms and fed to the quasi-supervised learning algorithm. RESULTS: Larynx regions containing squamous cell carcinomas were accurately identified, with false and true positive rates of up to 21% and 87%, respectively. CONCLUSION: Texture separability between larynx squamous cell carcinoma and normal tissue was higher than that between colorectal adenocarcinoma and normal tissue in the colorectal database. Furthermore, the labeling performance for all larynx datasets was higher than or equal to that for the colorectal datasets. The results on the larynx datasets, in comparison with the former colorectal study, suggest that quasi-supervised texture classification can be a helpful method in histopathological image classification and analysis.
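The co-occurrence-matrix features mentioned in the study design tabulate how often pairs of gray levels occur at a fixed pixel offset; texture statistics such as contrast are then derived from the normalized matrix. A self-contained sketch for horizontally adjacent pixels on a toy image (not real slide data; the actual study's feature set is richer):

```python
# Illustrative gray-level co-occurrence matrix (GLCM) for the offset
# "one pixel to the right", plus the contrast statistic derived from it.
def glcm_contrast(image, levels):
    glcm = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):       # horizontal neighbor pairs
            glcm[a][b] += 1
    total = sum(map(sum, glcm))
    # contrast = sum over (i, j) of (i - j)^2 * p(i, j)
    return sum((i - j) ** 2 * glcm[i][j] / total
               for i in range(levels) for j in range(levels))

img = [[0, 0, 1, 1],                          # toy 4-level image
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
print(glcm_contrast(img, 4))                  # high for coarse texture
```

Library implementations (e.g. `skimage.feature.graycomatrix`) generalize this to arbitrary offsets and angles and to symmetric, normalized matrices.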