Fast Many-to-One Voice Conversion using Autoencoders

Yusuke Sekii, Ryohei Orihara, Keisuke Kojima, Yuichi Sei, Yasuyuki Tahara, Akihiko Ohsuga

2017

Abstract

Most voice conversion (VC) methods address the one-to-one VC problem, and few studies have tackled many-to-one or many-to-many cases. Preparing training data for a practical application is difficult with such methods because they require a large amount of parallel data. Furthermore, Deep Neural Network (DNN)-based methods take longer to convert speech than pre-DNN methods because they use complicated networks. In this study, we propose a VC method using autoencoders that reduces the amount of training data required and shortens the conversion time. In the proposed method, higher-order features are extracted from the acoustic features of source speakers by an autoencoder trained with the source speakers' data. These features are then converted into the higher-order features of a target speaker by a DNN. Finally, the converted higher-order features are restored to acoustic features by an autoencoder trained with data drawn from the target speaker. In the evaluation experiment, the proposed method outperforms conventional VC methods based on Gaussian Mixture Models (GMMs) and DNNs, in both one-to-one and many-to-one conversion with a small training set, in terms of both conversion accuracy and conversion time.
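To make the pipeline concrete, below is a minimal sketch (in PyTorch, not the authors' code) of the three-stage architecture the abstract describes: a source-side autoencoder extracts higher-order features, a DNN maps them into the target speaker's higher-order feature space, and a target-side autoencoder decodes them back to acoustic features. The feature and code dimensions, the single-hidden-layer shapes, and the convert helper are all illustrative assumptions, not values from the paper.

import torch
import torch.nn as nn

FEAT_DIM = 24   # acoustic features per frame, e.g. mel-cepstra (assumption)
CODE_DIM = 16   # higher-order feature dimension (assumption)

class Autoencoder(nn.Module):
    """One autoencoder per side: trained on source speakers' data,
    and separately on the target speaker's data."""
    def __init__(self, feat_dim: int = FEAT_DIM, code_dim: int = CODE_DIM):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, code_dim), nn.ReLU())
        self.decoder = nn.Linear(code_dim, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

source_ae = Autoencoder()   # assumed pre-trained on source speakers' data
target_ae = Autoencoder()   # assumed pre-trained on the target's data

# DNN mapping source higher-order features to target higher-order features.
mapper = nn.Sequential(
    nn.Linear(CODE_DIM, CODE_DIM), nn.ReLU(),
    nn.Linear(CODE_DIM, CODE_DIM),
)

def convert(frames: torch.Tensor) -> torch.Tensor:
    """Convert a batch of source acoustic frames to target-speaker frames."""
    with torch.no_grad():
        code = source_ae.encoder(frames)   # extract higher-order features
        code = mapper(code)                # map to target feature space
        return target_ae.decoder(code)     # restore acoustic features

converted = convert(torch.randn(100, FEAT_DIM))  # 100 dummy frames

Because conversion is just two shallow encoder/decoder passes plus a small mapping network, the per-frame cost stays low, which is consistent with the short conversion time the abstract claims.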



Paper Citation


in Harvard Style

Sekii Y., Orihara R., Kojima K., Sei Y., Tahara Y. and Ohsuga A. (2017). Fast Many-to-One Voice Conversion using Autoencoders. In Proceedings of the 9th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART, ISBN 978-989-758-220-2, pages 164-174. DOI: 10.5220/0006193301640174


in Bibtex Style

@conference{icaart17,
author={Yusuke Sekii and Ryohei Orihara and Keisuke Kojima and Yuichi Sei and Yasuyuki Tahara and Akihiko Ohsuga},
title={Fast Many-to-One Voice Conversion using Autoencoders},
booktitle={Proceedings of the 9th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},
year={2017},
pages={164-174},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006193301640174},
isbn={978-989-758-220-2},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 9th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART
TI - Fast Many-to-One Voice Conversion using Autoencoders
SN - 978-989-758-220-2
AU - Sekii Y.
AU - Orihara R.
AU - Kojima K.
AU - Sei Y.
AU - Tahara Y.
AU - Ohsuga A.
PY - 2017
SP - 164
EP - 174
DO - 10.5220/0006193301640174
ER -