
Turkish Journal of Electrical Engineering and Computer Sciences

DOI

10.3906/elk-1604-437

Abstract

In this work, we extract prosodic features from subjects as they talk while walking, using human-gait-speech data. These data are separated into 1D (human-speech) and 2D (human-gait) components using the adaptive-lifting scheme of the wavelet transform. Prosodic features such as speech duration, pitch, speaking rate, and speech momentum are extracted from the human-speech data in five natural languages (Hindi, Bengali, Oriya, Chhattisgarhi, and English) for the detection of behavioral patterns. These behavioral patterns form real-valued measured parameters stored in a knowledge-based model called the human-gait-speech model. Geometrical features such as step length, energy (effort), walking speed, and gait momentum are extracted from the human-gait data for the authentication of behavioral patterns. Data from 25 subjects of different ages, talking in the five languages while walking, are analyzed for the detection of behavioral patterns.
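The abstract's separation step relies on the lifting scheme of the wavelet transform. As a rough illustration of the split/predict/update structure that lifting-based transforms build on, here is a minimal sketch of one level of fixed-filter Haar lifting; note this is only an assumption-laden simplification, since the paper's adaptive-lifting variant chooses its predict/update filters from the data rather than using fixed Haar filters.

```python
def haar_lifting_forward(signal):
    """One level of forward Haar lifting: returns (approx, detail).

    Split: partition samples into even- and odd-indexed streams.
    Predict: each odd sample is predicted by its even neighbour.
    Update: even samples are adjusted so 'approx' preserves the local mean.
    """
    even = signal[0::2]
    odd = signal[1::2]
    detail = [o - e for o, e in zip(odd, even)]      # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]  # update step
    return approx, detail

def haar_lifting_inverse(approx, detail):
    """Undo the lifting steps in reverse order to recover the signal."""
    even = [a - d / 2 for a, d in zip(approx, detail)]  # undo update
    odd = [d + e for d, e in zip(detail, even)]         # undo predict
    signal = []
    for e, o in zip(even, odd):
        signal.extend([e, o])                           # merge streams
    return signal
```

Because every lifting step is trivially invertible, the transform reconstructs the input exactly, which is what makes lifting attractive for lossless decomposition of a mixed gait-speech signal.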

Keywords

Adaptive-lifting scheme of wavelet transform, out-of-corpus, blind speech signal separation, modified adaptive vector quantization

First Page

2820

Last Page

2830
