Classification between story-telling and poem recitation using head gestures of the talker
Published in the 12th International Conference on Signal Processing and Communication (SPCOM), 2018
In this work, we show that head gestures made while reciting poems (rhythmic speech) exhibit more periodic structure than those made while narrating stories. We use a dataset of 10 subjects reciting 20 poems and a separate set of 20 subjects, each narrating 5 stories. To quantify the periodicity of head gestures, we use a measure based on the peaks of the autocorrelation of the input signal; with this measure, poems reach a highest periodicity of 0.489, compared to a highest periodicity of 0.347 for stories. We further perform a classification task to distinguish rhythmic speech from spontaneous speech. We show that head-gesture features perform comparably to acoustic features (MFCCs), with accuracies of approximately 89% and 96%, respectively. We further show that combining head-gesture and acoustic features outperforms the acoustic features alone.
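As a rough illustration of the autocorrelation-based idea (not the paper's exact measure), the sketch below scores a 1-D head-motion trace by the height of the largest non-zero-lag peak in its normalized autocorrelation; the function name, peak-picking choice, and synthetic signals are assumptions for demonstration only.

```python
# Hedged sketch: periodicity from autocorrelation peaks (illustrative only).
import numpy as np
from scipy.signal import find_peaks

def periodicity_score(signal, min_lag=1):
    """Return a [0, 1] score: height of the largest non-zero-lag autocorrelation peak."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    # Keep non-negative lags of the full autocorrelation, normalize by the lag-0 value.
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    if ac[0] == 0:
        return 0.0
    ac = ac / ac[0]
    # Peaks away from lag 0; a value near 1 means strong self-similarity at that period.
    peaks, _ = find_peaks(ac[min_lag:])
    if len(peaks) == 0:
        return 0.0
    return float(ac[min_lag:][peaks].max())

# Example: a noisy sinusoidal "head pitch" trace scores higher than white noise.
t = np.linspace(0, 10, 1000)
rhythmic = np.sin(2 * np.pi * 2 * t) + 0.3 * np.random.randn(len(t))
spontaneous = np.random.randn(len(t))
print(periodicity_score(rhythmic), periodicity_score(spontaneous))
```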