Electrical - Electronic Engineering / Elektrik - Elektronik Mühendisliği

Permanent URI for this collection: https://hdl.handle.net/11147/11

Now showing 1 - 2 of 2
  • Article
    Citation - WoS: 4
    RAMCESS 2.X Framework: Expressive Voice Analysis for Realtime and Accurate Synthesis of Singing
    (Springer Verlag, 2008) d'Alessandro, Nicolas; Babacan, Onur; Bozkurt, Barış; Dubuisson, Thomas; Holzapfel, Andre; Kessous, Loic; Vlieghe, Maxime
    In this paper we present the work achieved in the context of the second version of the RAMCESS singing synthesis framework. The main improvement of this study is the integration of new algorithms for expressive voice analysis, especially the separation of the glottal source and the vocal tract. Realtime synthesis modules have also been refined. These elements have been integrated into an existing digital instrument: the HANDSKETCH 1.X, a bimanual controller. Moreover, this digital instrument is compared to existing systems.
  • Article
    Citation - WoS: 43
    Citation - Scopus: 59
    Causal-Anticausal Decomposition of Speech Using Complex Cepstrum for Glottal Source Estimation
    (Elsevier Ltd., 2011) Drugman, Thomas; Bozkurt, Barış; Dutoit, Thierry
    Complex cepstrum is known in the literature for linearly separating causal and anticausal components. Relying on advances achieved by the Zeros of the Z-Transform (ZZT) technique, we here investigate the possibility of using complex cepstrum for glottal flow estimation on a large-scale database. Via a systematic study of the windowing effects on the deconvolution quality, we show that the complex cepstrum causal-anticausal decomposition can be effectively used for glottal flow estimation when specific windowing criteria are met. It is also shown that this complex cepstral decomposition yields glottal estimates similar to those obtained with the ZZT method. However, since the complex cepstrum relies on FFT operations instead of factoring high-degree polynomials, it is considerably faster. Finally, in our tests on a large corpus of real expressive speech, we show that the proposed method has the potential to be used for voice quality analysis.
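    The causal-anticausal separation described in this abstract can be sketched in a few lines of NumPy. This is only an illustrative outline, not the authors' implementation: the function names are hypothetical, and the paper's specific windowing criteria (which the study shows are essential for a clean deconvolution) are not reproduced here. The complex cepstrum is the inverse FFT of the complex log spectrum; positive quefrencies carry the causal (minimum-phase) component and negative quefrencies the anticausal (maximum-phase) component associated with the glottal open phase.

    ```python
    import numpy as np

    def complex_cepstrum(x):
        """Complex cepstrum of a frame x: IFFT of log|X| + j*unwrapped phase.

        Note: a careful implementation would also remove the linear phase
        term before taking the log; it is omitted here for brevity.
        """
        X = np.fft.fft(x)
        log_X = np.log(np.abs(X) + 1e-12) + 1j * np.unwrap(np.angle(X))
        return np.real(np.fft.ifft(log_X))

    def causal_anticausal_split(x):
        """Split a frame into causal and anticausal cepstral parts."""
        c = complex_cepstrum(x)
        N = len(c)
        causal = np.zeros(N)
        anticausal = np.zeros(N)
        causal[0] = c[0] / 2.0        # zero quefrency is shared equally
        anticausal[0] = c[0] / 2.0
        causal[1:N // 2] = c[1:N // 2]        # positive quefrencies
        anticausal[N // 2:] = c[N // 2:]      # negative (wrapped) quefrencies
        return causal, anticausal
    ```

    By construction the two parts sum back to the full complex cepstrum; in the paper's setting, the anticausal part (after exponentiation and inverse transformation) would serve as the glottal flow estimate.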