Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection
Permanent URI for this collection: https://hdl.handle.net/11147/7148
10 results
Article: AI-Supported Seismic Performance Evaluation of Structures: Challenges, Gaps, and Future Directions at Early Design Stages (Elsevier Sci Ltd, 2026)
Ak, Fatma; Ekici, Berk; Demir, Ugur
This study reviews 91 journal articles at the intersection of earthquake-resistant building design and artificial intelligence (AI)-based modeling, covering machine learning, deep learning, and metaheuristic optimization algorithms. Previous reviews of AI applications have examined engineering problems without considering the impact of architectural design parameters and structural irregularities on seismic performance. This review discusses the role of AI in integrating architectural design variables and seismic performance objectives, highlighting challenges, gaps, and future directions for the early design phase. The reviewed articles demonstrate that AI successfully addresses seismic performance objectives; however, a holistic framework for assessing architectural and structural variables together has yet to be presented. The review highlights key findings, gaps, and future directions for those applying AI to earthquake-resistant building design.

Article: Knowledge-Based Training of Learning Architectures Under Input Sensitivity Constraints for Improved Explainability (Pergamon-Elsevier Science Ltd, 2026)
Sildir, Hasan; Erturk, Emrullah; Edizer, Deniz Tuna; Deliismail, Ozgun; Durna, Yusuf Muhammed; Hamit, Bahtiyar
The traditional machine learning (ML) training problem is unconstrained and lacks an explicit formulation of the underlying driving phenomena. Such a formulation, based solely on experimental data, does not guarantee that qualitative knowledge about the relationships among variables is preserved, owing to various theoretical issues in the optimization task. This study tightens Artificial Neural Network (ANN) training by including input sensitivities as additional constraints, and applies the approach to regression and classification tasks based on literature data.
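The sensitivity-constrained training idea in this entry can be illustrated with a toy example (hypothetical data and a simple penalty formulation; the paper instead solves a rigorously constrained nonlinear program via algorithmic differentiation). For a linear model the input sensitivity dy/dx1 is just the weight w1, so a known sign for that sensitivity can be enforced during gradient descent:

```python
# Toy sketch of sensitivity-constrained training (hypothetical data and a
# simple penalty form; the paper solves a rigorous constrained nonlinear
# program instead). For a linear model y = w1*x1 + w2*x2 + b, the input
# sensitivity dy/dx1 equals w1, so domain knowledge about the sign of
# that sensitivity can be imposed directly on w1.

def train(data, expected_sign_x1=1.0, lam=10.0, lr=0.01, epochs=2000):
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        g1 = g2 = gb = 0.0
        for x1, x2, y in data:
            err = (w1 * x1 + w2 * x2 + b) - y
            g1 += 2.0 * err * x1
            g2 += 2.0 * err * x2
            gb += 2.0 * err
        # Penalise a sensitivity whose sign disagrees with expertise.
        if expected_sign_x1 * w1 < 0.0:
            g1 -= lam * expected_sign_x1
        n = len(data)
        w1 -= lr * g1 / n
        w2 -= lr * g2 / n
        b -= lr * gb / n
    return w1, w2, b
```

With the penalty disabled, the data alone can drive the sensitivity to the "wrong" sign; with it enabled, the fitted model respects the expert-specified sign at some cost in fit, mirroring the accuracy-explainability trade-off the authors report.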
In theory, such a sensitivity represents the direction of change of the target variable per change in an indicator measurement. The resulting nonlinear optimization problem is solved with a rigorous solver, and the sensitivity expressions are included through algorithmic differentiation. Compared to traditional methods, and at the cost of an acceptable decrease in prediction capability, the proposed model delivers more intuitive, explainable, and experimentally verifiable predictions under input variable variations, with improved robustness to overfitting, while also serving robust identification tasks. A classification case study develops a patient-oriented clinical decision support system based on the impact of cancer-indicating variables; competitive test prediction accuracy is obtained relative to commonly used algorithms despite a 10% decrease in training accuracy. The regression case is built on energy load estimation and accounts for prominent domain considerations to obtain the desired sensitivity patterns; the proposed methodology accepts a noticeable accuracy drop relative to some formulations in order to respect these knowledge patterns. The approach delivers patterns compatible with practitioner expertise and is compared to widely used machine learning algorithms, whose performances are evaluated through common statistics as well as multi-variable response graphs.

Article: An Alternative Software Benchmarking Dataset: Effort Estimation With Machine Learning (Elsevier Science Inc, 2026)
Yurum, Ozan Rasit; Unlu, Huseyin; Demirors, Onur
Effort estimation plays a vital role in software project planning, as accurate estimates of required human resources are essential for success. Traditional estimation models often depend on historical size and effort data, yet organizations frequently struggle to access reliable effort records. Public benchmarking datasets such as ISBSG offer useful data but may lack coverage or involve licensing fees.
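At its simplest, size-based effort estimation of the kind benchmarked in this entry regresses effort on functional size. A minimal least-squares sketch with hypothetical project data (the study itself compares nine ML algorithms and uses richer evaluation):

```python
# Minimal size-to-effort regression (hypothetical data; a stand-in for
# the linear-regression baseline compared in studies like the one above).

def fit_effort_model(sizes, efforts):
    """Least-squares fit of effort = a * size + b."""
    n = len(sizes)
    mean_s = sum(sizes) / n
    mean_e = sum(efforts) / n
    cov = sum((s - mean_s) * (e - mean_e) for s, e in zip(sizes, efforts))
    var = sum((s - mean_s) ** 2 for s in sizes)
    a = cov / var
    b = mean_e - a * mean_s
    return a, b

def mmre(model, sizes, efforts):
    """Mean magnitude of relative error, a common estimation metric."""
    a, b = model
    return sum(abs((a * s + b) - e) / e
               for s, e in zip(sizes, efforts)) / len(sizes)
```

A hypothetical workflow: fit on (COSMIC size, person-hours) pairs from a benchmarking dataset, then report MMRE on held-out projects to assess generalization.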
To address this issue, we previously introduced a free, extendable benchmarking dataset that integrates functional size and effort data extracted from 18 studies. In this study, we examine the effectiveness of our dataset for predictive effort estimation and compare it with the widely used ISBSG dataset. Our analysis includes 337 records from our dataset and 732 ISBSG projects, focusing on those with COSMIC size data. We first developed and compared models using linear regression and nine machine learning algorithms: Bayesian Ridge, Ridge Regression, Decision Tree, Random Forest, XGBoost, LightGBM, k-Nearest Neighbors, Multi-Layer Perceptron, and Support Vector Regression. We then selected the best-performing models and applied them to an unseen evaluation dataset to assess their generalization performance. The results show that machine learning performance varies with the evaluation method and dataset characteristics. Despite having fewer records, our dataset enabled more accurate predictions than ISBSG in most cases, highlighting its potential for effort estimation. This study demonstrates the viability of our dataset for building predictive models and supports the use of machine learning to improve estimation accuracy. Expanding this dataset could offer a valuable, open-access resource for organizations seeking effective and low-cost estimation solutions.

Article: Comprehensive Analysis and Machine Learning-Based Solutions for Drift Behavior in Ambient Atomic Force Microscope Conditions (Pergamon-Elsevier Science Ltd, 2025)
Deveci, D. Gemici; Barandir, T. Karakoyun; Unverdi, O.; Celebi, C.; Temur, L. O.; Atilla, D. C.
This study demonstrates the effectiveness of combining numerical methods, Computer Vision (CV), and Machine Learning (ML) approaches to analyze and predict drift behavior in high-resolution Atomic Force Microscope (AFM) scanning procedures.
Using Long Short-Term Memory (LSTM) models for time series analysis and the Light Gradient Boosting Machine (LightGBM) algorithm for predictive modeling, significant progress was achieved in understanding the dynamic and variable nature of drift and mitigating its impact on scanning. The models demonstrated a robust predictive capability, achieving approximately 94% accuracy in drift predictions. The study emphasizes the nonstationary characteristics of drift and demonstrates how selecting features directly related to the target variable enhances the efficiency of the model and enables adaptive real-time correction. These findings confirm the predictive strength of the models and highlight the potential for integrating ML predictions with real-time feedback mechanisms to improve the resolution and stability of AFM imaging in both scientific and industrial applications.

Article: A Novel Framework for Droplet/Particle Size Distribution in Suspension Polymerization Using Physics-Informed Neural Network (PINN) (Elsevier Science Sa, 2025)
Citation - WoS: 2; Citation - Scopus: 1
Turan, Meltem; Dutta, Abhishek
A Machine Learning (ML) based neural network can capture the complex evolution of polymer chain distributions, accounting for factors such as initiation, propagation, and termination steps in a suspension polymerization process, by integrating a stagewise molar balance model (MBM) and a population balance model (PBM) with a Physics-Informed Neural Network (PINN). The integrated PINN framework is proposed to efficiently solve these equations, incorporating known physical laws as constraints and minimizing errors in both the distribution and dynamics of the polymer chains.
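The physics-informed idea can be illustrated with a toy analogue (not the authors' MBM/PBM equations): fit a one-parameter ansatz y(t) = exp(w t) to the ODE dy/dt = -k y with y(0) = 1 by penalizing the ODE residual at collocation points. A full PINN replaces the ansatz with a neural network and uses automatic differentiation:

```python
import math

# Toy analogue of a physics-informed loss (not the paper's MBM/PBM
# equations): fit y(t) = exp(w * t) to the ODE dy/dt = -k * y, y(0) = 1,
# by penalising the ODE residual at collocation points ts.

def pinn_loss(w, k, ts):
    # Residual of dy/dt + k*y = 0 for the ansatz y = exp(w*t).
    res = sum(((w + k) * math.exp(w * t)) ** 2 for t in ts) / len(ts)
    ic = (math.exp(w * 0.0) - 1.0) ** 2  # initial-condition term
    return res + ic

def fit(k, ts, w0=0.0, lr=0.05, steps=500, h=1e-6):
    # Gradient descent on the physics loss, using a central-difference
    # gradient in place of the autodiff a real PINN would use.
    w = w0
    for _ in range(steps):
        grad = (pinn_loss(w + h, k, ts) - pinn_loss(w - h, k, ts)) / (2 * h)
        w -= lr * grad
    return w
```

Minimizing the residual loss drives w toward -k, recovering the analytical solution exp(-k t); the paper applies the same residual-penalty principle to the moment and population balance equations.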
By optimizing the neural network parameters, such as the weight matrices and bias vectors, the model reproduces the moments of the polymer molecular weight distribution in close alignment with numerical solutions, and it generates population balance solutions in excellent agreement with their analytical counterparts. Sensitivity analyses of the neural network depth were performed to quantify how architectural choices affect model fidelity. The resulting MBM-PINN and PBM-PINN integrated framework demonstrates robustness and versatility, accurately capturing droplet/particle dynamics (96-97%). The proposed methodology can provide a powerful tool for faster and scalable simulations of polymerization reactions, enabling better prediction of product properties, which could be used to optimize reaction conditions in industrial applications.

Article: Technology-Enhanced Multimodal Learning Analytics in Higher Education: A Systematic Literature Review (Institute of Electrical and Electronics Engineers Inc., 2025)
Raşıt Yürüm, O.
Multimodal learning analytics (MMLA) is an emerging field within learning analytics that promises a more comprehensive analysis of the learning process, thanks to advances in technological devices and data science. The purpose of this study was to systematically explore technology-enhanced multimodal learning analytics in higher education. A systematic literature review was performed following the PRISMA guidelines, and 45 studies published between January 2012 and June 2024 were identified. The findings demonstrated that China, the USA, Australia, and Chile were the leading contributors to MMLA research, with a notable surge in publications in 2021. Audio recorders, cameras, webcams, eye trackers, and wristbands were the most commonly used devices. Most studies were conducted in experiment rooms or laboratories, though studies in authentic classroom settings have been growing.
Data were primarily collected during activities such as programming, simulation exercises, presentations, discussions, writing, watching videos, reading, or exams, as well as throughout the entire instructional process, predominantly in computer science, health, and engineering courses. The studies were mainly predictive or descriptive, whereas only a few were prescriptive. Frequently tracked data types included audio, gaze, log, facial expression, physiological, and behavioral data. Traditional machine learning and basic statistics were the most commonly used analytical methods, whilst advanced statistics and deep learning were utilized relatively less. Test performance, engagement, emotional state, debugging performance, and learning experience were the most popular target variables. The studies also pointed out several implications and future directions, with a significant portion highlighting the development of interventions, frameworks, or adaptive systems using MMLA. © 2013 IEEE.

Article: Predicting the Area Moment of Inertia of Beam and Column Using Machine Learning and HyperNetExplorer (Springer Science and Business Media Deutschland GmbH, 2025)
Aydın, Y.; Nigdeli, S.M.; Roozbahan, M.; Bekdaş, G.; Işıkdağ, Ü.
Beams and columns are the most important elements of steel frame structures. Damage to a beam or column can expose the structure to serious hazards and cause collapse. In the structural engineering literature, little work has addressed the estimation of the area moment of inertia of beams and columns. The aim of this study was to predict the area moment of inertia of beams and columns using HyperNetExplorer, a tool developed by the authors. The method aims to bring innovation by optimizing artificial neural networks (ANNs). In this study, a prediction study is performed using 306 collected data points on beam and column area moment of inertia.
Classical ML models (linear regression (LR), decision tree regression (DTR), k-neighbors regression (KNN), polynomial regression (PR), random forest regression (RFR), gradient boosting regression (GBR), and histogram gradient boosting regression (HGBR)), as well as the neural architecture search (NAS) based HyperNetExplorer, were applied to predict beam and column area moment of inertia. The prediction performances were compared using different performance metrics (coefficient of determination (R2) and mean squared error (MSE)), and HyperNetExplorer, developed by the authors, showed the highest performance (R2 = 0.98, MSE = 246.88). Furthermore, SHapley Additive exPlanations (SHAP) were used to explain the effects of features in the prediction models, and it was observed that the most effective features for model predictions were the loading on the beam and the length. The results show that the proposed NAS-based approach and the developed tool, HyperNetExplorer, provide better performance than classical ML methods. © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2025.

Article: Data Driven Modeling and Design of Cellulose Acetate-Polysulfone Blend Ultrafiltration Membranes Based on Artificial Neural Networks (Elsevier Ltd, 2025)
Citation - WoS: 3; Citation - Scopus: 3
Gungormus, E.
This study aimed to develop and validate an Artificial Neural Network (ANN) model for the design and optimization of cellulose acetate-polysulfone blend ultrafiltration membranes produced via the Non-Solvent Induced Phase Separation method. After applying data science techniques to a comprehensive dataset compiled from literature studies, the final ANN model exhibited superior predictive capability and effectively captured complex nonlinear relationships in the data. The optimum model configuration, a single hidden layer containing six neurons, provided reliable predictions by avoiding overfitting and underfitting risks and significantly reducing error metrics.
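For reference, a network of the shape just described, a single hidden layer of six neurons, has a forward pass like the following (the weights here are placeholders; the study fits them to membrane performance data):

```python
import math

# Minimal forward pass for a one-hidden-layer network with six tanh
# neurons, the configuration described above. Weights are placeholders.

def ann_forward(x, W1, b1, W2, b2):
    """x: input vector; W1: 6 rows of len(x) weights; b1: 6 hidden
    biases; W2: 6 output weights; b2: output bias."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2
```

Keeping the hidden layer this small limits the parameter count, which is one reason such a configuration can avoid overfitting on modest experimental datasets.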
The model analyzed the effects of input variables on outputs, revealing that different stages of the membrane preparation process had varying impacts on performance metrics. This finding emphasized the importance of systematically optimizing the preparation process to enhance overall membrane performance. The model's predictions showed strong agreement with experimental data, further validating its accuracy. The optimum production conditions identified by the model offered significant improvements in membrane performance. Moreover, the model accelerated the membrane development process by reducing the required number of experimental trials and promoting efficient resource utilization. This approach contributed to both economic and environmental sustainability by reducing production costs and energy consumption. The study highlighted the significant potential of machine learning techniques to enable precise, efficient, and sustainable membrane design and synthesis, paving the way for future innovations and advancements in this field. © 2025 Elsevier Ltd.

Article: TCGEx: A Powerful Visual Interface for Exploring and Analyzing Cancer Gene Expression Data (Springer Nature, 2025)
Citation - WoS: 1
Kus, M. Emre; Sahin, Cagatay; Kilic, Emre; Askin, Arda; Ozgur, M. Mert; Karahanogullari, Gokhan; Ekiz, H. Atakan
Analyzing gene expression data from The Cancer Genome Atlas (TCGA) and similar repositories often requires advanced coding skills, creating a barrier for many researchers. To address this challenge, we developed The Cancer Genome Explorer (TCGEx), a user-friendly, web-based platform for conducting sophisticated analyses such as survival modeling, gene set enrichment analysis, unsupervised clustering, and linear regression-based machine learning. TCGEx provides access to preprocessed TCGA data and immune checkpoint inhibition studies while allowing integration of user-uploaded datasets.
Using TCGEx, we explore molecular subsets of human melanoma and identify microRNAs associated with intratumoral immunity. These findings are validated with independent clinical trial data on immune checkpoint inhibitors for melanoma and other cancers. In addition, we identify cytokine genes that can be used to predict treatment responses to various immune checkpoint inhibitors prior to treatment. Built on the R/Shiny framework, TCGEx offers customizable features to adapt analyses to diverse research contexts and generate publication-ready visualizations. TCGEx is freely available at https://tcgex.iyte.edu.tr, providing an accessible tool to extract insights from cancer transcriptomics data.

Article: Comparison of Conventional and Machine Learning Models for Kinetic Modelling of Biomethane Production From Pretreated Tomato Plant Residues (Elsevier, 2025)
Citation - WoS: 3; Citation - Scopus: 4
Fidan, Berrak; Bodur, Fatma-Gamze; Oztep, Gulsh; Gungoren-Madenoglu, Tuelay; Baba, Alper; Kabay, Nalan
Tomato plant residues (Solanum lycopersicum L.), an abundant lignocellulosic biomass left after harvest, lack sustainable applications. These residues can be utilized as substrates in anaerobic digestion for biomethane production, generating energy and reducing waste. The purpose of this study was to investigate the sustainable utilization of tomato plant residues for biomethane production under varying conditions and to model the biological kinetics. The study evaluated the effects of varying substrate/inoculum ratios, sulfuric acid pretreatment concentrations, and yeast (Saccharomyces cerevisiae) addition on biogas and biomethane yields under mesophilic conditions (37 °C). Maximum biogas and biomethane yields in the studied range were obtained when the substrate/inoculum ratio was 3 (g substrate/g inoculum), the sulfuric acid concentration used for residue pretreatment was 2% v/v, and the substrate/yeast ratio was 10 (g substrate/g yeast).
A substrate/yeast ratio of 10 increased cumulative biogas and biomethane production by 96.5% and 128.9%, respectively. Conventional models (Modified Gompertz, Cone, First-order, Logistic) and Machine Learning models (Support Vector Machine and Neural Network) were compared for modelling the biological kinetics. The Machine Learning models gave fitting results comparable to those of the conventional models. The results suggest that Machine Learning models (RMSE: 2.5833-12.0500) are as reliable as conventional kinetic models (RMSE: 2.1796-13.4880) for forecasting biomethane production in anaerobic digestion processes, and that Machine Learning models can be applied without prior understanding of biomethane production kinetics.
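The Modified Gompertz model named above is a standard form for cumulative biomethane curves. A sketch with illustrative (not the study's) parameter values, alongside the RMSE metric used for the comparison:

```python
import math

# Modified Gompertz model commonly used for cumulative biomethane
# curves: M(t) = P * exp(-exp(Rm * e / P * (lam - t) + 1)), where P is
# the ultimate yield, Rm the maximum production rate, and lam the lag
# time. Parameter values used below are illustrative, not the study's fits.

def modified_gompertz(t, P, Rm, lam):
    return P * math.exp(-math.exp(Rm * math.e / P * (lam - t) + 1.0))

def rmse(observed, predicted):
    """Root mean square error, the goodness-of-fit metric reported above."""
    n = len(observed)
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
```

Conventional fits minimize RMSE of the predicted cumulative curve against measurements; the study reports ML models reaching comparable RMSE ranges without assuming any explicit kinetic form.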
