Socially-Enriched Multimedia Data Co-Clustering
Abstract
Heterogeneous data co-clustering is a commonly used technique for tapping the rich meta-information of multimedia web documents, including category, annotation, and description, for associative discovery. However, most co-clustering methods proposed for heterogeneous data do not consider the representation problem of short and noisy text, and their performance is limited by the empirical weighting of the multimodal features. This chapter explains how to use the Generalized Heterogeneous Fusion Adaptive Resonance Theory (GHF-ART) for clustering large-scale web multimedia documents. Specifically, GHF-ART is designed to handle multimedia data with an arbitrarily rich level of meta-information. For handling short and noisy text, GHF-ART employs the representation and learning methods of PF-ART, as described in Sect. 3.5, which identify key tags for cluster prototype modeling by learning the probabilistic distribution of tag occurrences in clusters. More importantly, GHF-ART incorporates an adaptive method for the effective fusion of multimodal features, which weights the features of multiple data sources by incrementally measuring the importance of feature modalities through the intra-cluster scatters. Extensive experiments on two web image datasets and one text document set have shown that GHF-ART achieves significantly better clustering performance and is much faster than many existing state-of-the-art algorithms. The content of this chapter is summarized and extended from [12].
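To make the adaptive fusion idea concrete, the following is a minimal sketch of how modality weights could be derived from intra-cluster scatter, in the spirit of the method described above. The function name, the L1 distance, and the exp(-scatter) mapping are illustrative assumptions, not necessarily the exact GHF-ART update rule given in the chapter.

```python
import numpy as np

def modality_weights(features_by_channel, assignments, prototypes_by_channel):
    """Weight each feature modality by its intra-cluster scatter (hypothetical helper).

    Channels whose members stay close to their cluster prototypes (small scatter)
    are treated as more reliable and receive larger weights.
    """
    weights = []
    for X, P in zip(features_by_channel, prototypes_by_channel):
        # X: (n_samples, dim_k) features of channel k
        # P: (n_clusters, dim_k) cluster prototypes of channel k
        # assignments: (n_samples,) cluster index of each sample
        scatter = np.mean(np.abs(X - P[assignments]).sum(axis=1))
        weights.append(np.exp(-scatter))  # tighter clusters -> larger weight
    weights = np.asarray(weights)
    return weights / weights.sum()        # normalize weights across modalities
```

In use, the per-channel similarities between an input document and a candidate cluster would be combined with these weights when computing the overall choice and match values, and the weights would be re-estimated incrementally as clusters absorb new documents.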
Recommended Citation
L. Meng et al., "Socially-Enriched Multimedia Data Co-Clustering," Advanced Information and Knowledge Processing, pp. 111-135, Springer London, May 2019.
The definitive version is available at https://doi.org/10.1007/978-3-030-02985-2_5
Department(s)
Electrical and Computer Engineering
Research Center/Lab(s)
Center for High Performance Computing Research
International Standard Serial Number (ISSN)
1610-3947
Document Type
Book - Chapter
Document Version
Citation
File Type
text
Language(s)
English
Rights
© 2019 Springer London. All rights reserved.
Publication Date
01 May 2019