Intensity and Phase Stacked Analysis of a Φ-OTDR System Using Deep Transfer Learning and Recurrent Neural Networks

dc.contributor.author Kayan, Ceyhun Efe
dc.contributor.author Yüksel Aldoğan, Kıvılcım
dc.contributor.author Gümüş, Abdurrahman
dc.date.accessioned 2023-04-19T12:36:46Z
dc.date.available 2023-04-19T12:36:46Z
dc.date.issued 2023
dc.description.abstract Distributed acoustic sensors (DAS) are effective apparatuses that are widely used in many application areas for recording signals of various events with very high spatial resolution along optical fibers. To properly detect and recognize the recorded events, advanced signal processing algorithms with high computational demands are crucial. Convolutional neural networks (CNNs) are highly capable tools to extract spatial information and are suitable for event recognition applications in DAS. Long short-term memory (LSTM) is an effective instrument to process sequential data. In this study, a two-stage feature extraction methodology that combines the capabilities of these neural network architectures with transfer learning is proposed to classify vibrations applied to an optical fiber by a piezoelectric transducer. First, the differential amplitude and phase information is extracted from the phase-sensitive optical time domain reflectometer (Φ-OTDR) recordings and stored in a spatiotemporal data matrix. Then, a state-of-the-art pre-trained CNN without dense layers is used as a feature extractor in the first stage. In the second stage, LSTMs are used to further analyze the features extracted by the CNN. Finally, a dense layer is used to classify the extracted features. To observe the effect of different CNN architectures, the proposed model is tested with five state-of-the-art pre-trained models (VGG-16, ResNet-50, DenseNet-121, MobileNet, and Inception-v3). The results show that the VGG-16 architecture in the proposed framework achieves 100% classification accuracy in 50 trainings and yields the best results on the Φ-OTDR dataset. The results of this study indicate that pre-trained CNNs combined with LSTM are very suitable to analyze differential amplitude and phase information represented in a spatiotemporal data matrix, which is promising for event recognition operations in DAS applications. (c) 2023 Optica Publishing Group en_US
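The abstract's first preprocessing step (differential amplitude and phase from Φ-OTDR recordings, stored in a spatiotemporal data matrix) can be sketched as follows. This is a minimal illustration under assumed shapes and a made-up `build_spatiotemporal_matrix` helper, not the authors' exact preprocessing: raw traces are taken as complex samples over (time, fiber position), differenced along the spatial axis, and stacked as two channels so a pre-trained CNN can consume the result like an image.

```python
import numpy as np

def build_spatiotemporal_matrix(traces):
    """Illustrative sketch: traces is a complex array of shape
    (time_samples, fiber_positions) from a phase-sensitive OTDR."""
    amplitude = np.abs(traces)
    # unwrap the phase along the fiber axis before differencing
    phase = np.unwrap(np.angle(traces), axis=1)
    # spatial differencing suppresses common-mode variations
    d_amp = np.diff(amplitude, axis=1)
    d_phase = np.diff(phase, axis=1)
    # stack as channels: (time, positions - 1, 2), an image-like
    # input suitable for a CNN feature extractor
    return np.stack([d_amp, d_phase], axis=-1)

rng = np.random.default_rng(0)
traces = rng.standard_normal((256, 100)) + 1j * rng.standard_normal((256, 100))
matrix = build_spatiotemporal_matrix(traces)
print(matrix.shape)  # (256, 99, 2)
```

In the second stage of the paper's pipeline, each such matrix would be passed through a pre-trained CNN (dense layers removed) and the resulting feature sequence fed to an LSTM and a dense classifier.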
dc.description.sponsorship Turkiye Bilimsel ve Teknolojik Arastirma Kurumu [BIDEB-2219-1059B191600612] en_US
dc.identifier.doi 10.1364/AO.481757
dc.identifier.issn 1559-128X
dc.identifier.issn 2155-3165
dc.identifier.scopus 2-s2.0-85149144733
dc.identifier.uri https://doi.org/10.1364/AO.481757
dc.identifier.uri https://hdl.handle.net/11147/13312
dc.language.iso en en_US
dc.publisher Optica Publishing Group en_US
dc.relation.ispartof Applied Optics en_US
dc.rights info:eu-repo/semantics/openAccess en_US
dc.subject Recognition en_US
dc.title Intensity and Phase Stacked Analysis of a Φ-OTDR System Using Deep Transfer Learning and Recurrent Neural Networks en_US
dc.type Article en_US
dspace.entity.type Publication
gdc.author.institutional Kayan, Ceyhun Efe
gdc.author.institutional Yüksel Aldoğan, Kıvılcım
gdc.author.institutional Gümüş, Abdurrahman
gdc.author.scopusid 57792931300
gdc.author.scopusid 24831988400
gdc.author.scopusid 35315599800
gdc.bip.impulseclass C4
gdc.bip.influenceclass C5
gdc.bip.popularityclass C4
gdc.coar.access open access
gdc.coar.type text::journal::journal article
gdc.collaboration.industrial false
gdc.description.department İzmir Institute of Technology. Electrical and Electronics Engineering en_US
gdc.description.departmenttemp [Kayan, Ceyhun Efe; Aldogan, Kivilcim Yuksel; Gumus, Abdurrahman] Izmir Inst Technol, Elect & Elect Engn, Izmir, Turkiye en_US
gdc.description.endpage 1764 en_US
gdc.description.issue 7 en_US
gdc.description.publicationcategory Article - International Refereed Journal - Institutional Faculty Member en_US
gdc.description.scopusquality Q3
gdc.description.startpage 1753 en_US
gdc.description.volume 62 en_US
gdc.description.wosquality Q3
gdc.identifier.openalex W4318703927
gdc.identifier.pmid 37132922
gdc.identifier.wos WOS:000952540800001
gdc.index.type WoS
gdc.index.type Scopus
gdc.oaire.diamondjournal false
gdc.oaire.impulse 8.0
gdc.oaire.influence 3.2257357E-9
gdc.oaire.isgreen true
gdc.oaire.keywords FOS: Computer and information sciences
gdc.oaire.keywords Computer Science - Machine Learning
gdc.oaire.keywords Sound (cs.SD)
gdc.oaire.keywords Computer Vision and Pattern Recognition (cs.CV)
gdc.oaire.keywords Computer Science - Computer Vision and Pattern Recognition
gdc.oaire.keywords Computer Science - Sound
gdc.oaire.keywords Machine Learning (cs.LG)
gdc.oaire.keywords Audio and Speech Processing (eess.AS)
gdc.oaire.keywords FOS: Electrical engineering, electronic engineering, information engineering
gdc.oaire.keywords Electrical Engineering and Systems Science - Audio and Speech Processing
gdc.oaire.popularity 1.0234037E-8
gdc.oaire.publicfunded false
gdc.oaire.sciencefields 02 engineering and technology
gdc.oaire.sciencefields 01 natural sciences
gdc.oaire.sciencefields 0103 physical sciences
gdc.oaire.sciencefields 0202 electrical engineering, electronic engineering, information engineering
gdc.openalex.collaboration National
gdc.openalex.fwci 2.32237457
gdc.openalex.normalizedpercentile 0.86
gdc.openalex.toppercent TOP 1%
gdc.opencitations.count 10
gdc.plumx.crossrefcites 5
gdc.plumx.mendeley 12
gdc.plumx.pubmedcites 1
gdc.plumx.scopuscites 12
gdc.scopus.citedcount 12
gdc.wos.citedcount 9
relation.isAuthorOfPublication.latestForDiscovery ce5ce1e2-17ef-4da2-946d-b7a26e44e461
relation.isOrgUnitOfPublication.latestForDiscovery 9af2b05f-28ac-4018-8abe-a4dfe192da5e