Non-negative multilinear principal component analysis of auditory temporal modulations for music genre classification

Article


Panagakis, Y., Kotropoulos, C. and Arce, G. 2010. Non-negative multilinear principal component analysis of auditory temporal modulations for music genre classification. IEEE Transactions on Audio, Speech, and Language Processing. 18 (3), pp. 576-588. https://doi.org/10.1109/TASL.2009.2036813
Type: Article
Title: Non-negative multilinear principal component analysis of auditory temporal modulations for music genre classification
Authors: Panagakis, Y., Kotropoulos, C. and Arce, G.
Abstract

Motivated by psychophysiological investigations on the human auditory system, a bio-inspired two-dimensional auditory representation of music signals is exploited, which captures the slow temporal modulations. Although each recording is represented by a second-order tensor (i.e., a matrix), a third-order tensor is needed to represent a music corpus. Non-negative multilinear principal component analysis (NMPCA) is proposed for the unsupervised dimensionality reduction of the third-order tensors. The NMPCA maximizes the total tensor scatter while preserving the non-negativity of auditory representations. An algorithm for NMPCA is derived by exploiting the structure of the Grassmann manifold. The NMPCA is compared against three multilinear subspace analysis techniques, namely the non-negative tensor factorization, the high-order singular value decomposition, and the multilinear principal component analysis, as well as their linear counterparts, i.e., the non-negative matrix factorization, the singular value decomposition, and the principal component analysis, in extracting features that are subsequently classified by either support vector machine or nearest-neighbor classifiers. Three different sets of experiments conducted on the GTZAN and the ISMIR2004 Genre datasets demonstrate the superiority of NMPCA over the aforementioned subspace analysis techniques in extracting more discriminating features, especially when the training set has small cardinality. The best classification accuracies reported in the paper exceed those obtained by state-of-the-art music genre classification algorithms applied to both datasets.
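
For a concrete picture of the objective sketched in the abstract, the following Python/NumPy snippet projects a corpus of non-negative matrices onto two non-negative mode matrices so that the total tensor scatter sum_n ||U1^T X_n U2||_F^2 is approximately maximized. This is a simplified illustration only, not the authors' Grassmann-manifold algorithm; the function name nonneg_mode_projection, the toy data, and the plain projected-gradient updates are hypothetical choices made here for clarity.

import numpy as np

def nonneg_mode_projection(X, d1, d2, n_iter=200, seed=0):
    """Illustrative sketch: non-negative two-mode projection of a corpus of
    non-negative matrices X (shape N x I1 x I2), in the spirit of NMPCA.

    Alternating projected gradient ascent on the total tensor scatter
        sum_n || U1.T @ X[n] @ U2 ||_F^2
    subject to U1 >= 0, U2 >= 0 and unit-norm columns. The Grassmann-manifold
    machinery of the actual NMPCA algorithm is deliberately omitted.
    """
    rng = np.random.default_rng(seed)
    N, I1, I2 = X.shape
    U1 = rng.random((I1, d1))
    U2 = rng.random((I2, d2))

    def normalise(U):
        return U / (np.linalg.norm(U, axis=0, keepdims=True) + 1e-12)

    U1, U2 = normalise(U1), normalise(U2)
    for _ in range(n_iter):
        # Mode-1 scatter with U2 fixed: M1 = sum_n (X_n U2)(X_n U2)^T
        XU2 = X @ U2                                  # (N, I1, d2)
        M1 = np.einsum('nij,nkj->ik', XU2, XU2)       # (I1, I1)
        step = 1.0 / (np.trace(M1) + 1e-12)           # crude step size
        U1 = normalise(np.maximum(U1 + step * (M1 @ U1), 0.0))

        # Mode-2 scatter with U1 fixed: M2 = sum_n (X_n^T U1)(X_n^T U1)^T
        XtU1 = np.einsum('nij,ik->njk', X, U1)        # (N, I2, d1)
        M2 = np.einsum('nij,nkj->ik', XtU1, XtU1)     # (I2, I2)
        step = 1.0 / (np.trace(M2) + 1e-12)
        U2 = normalise(np.maximum(U2 + step * (M2 @ U2), 0.0))

    # Low-dimensional representation of every recording: U1^T X_n U2
    feats = np.einsum('ik,nij,jl->nkl', U1, X, U2)    # (N, d1, d2)
    return U1, U2, feats

# Toy usage: 20 "recordings", each a 32 x 48 non-negative modulation map
X = np.abs(np.random.default_rng(1).normal(size=(20, 32, 48)))
U1, U2, feats = nonneg_mode_projection(X, d1=8, d2=6)
print(feats.shape)  # (20, 8, 6); flatten per recording before an SVM or k-NN

In the paper the projected features are fed to support vector machine or nearest-neighbor classifiers; the flattening step in the toy usage above mirrors that pipeline in the simplest possible way.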

Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Journal: IEEE Transactions on Audio, Speech, and Language Processing
ISSN: 1558-7916
Publication dates
Print: 01 Mar 2010
Online: 17 Nov 2009
Publication process dates
Deposited: 06 Mar 2018
Accepted: 29 Sep 2009
Output status: Published
Digital Object Identifier (DOI): https://doi.org/10.1109/TASL.2009.2036813
Language: English
Permalink: https://repository.mdx.ac.uk/item/87841

