9th IEEE International Conference on Collaborative Computing: Networking, Applications and Worksharing

Research Article

Building Multi-model Collaboration in Detecting Multimedia Semantic Concepts

  • @INPROCEEDINGS{10.4108/icst.collaboratecom.2013.254110,
        author={Hsin-Yu Ha and Fausto Fleites and Shu-Ching Chen},
        title={Building Multi-model Collaboration in Detecting Multimedia Semantic Concepts},
        proceedings={9th IEEE International Conference on Collaborative Computing: Networking, Applications and Worksharing},
        publisher={ICST},
        proceedings_a={COLLABORATECOM},
        year={2013},
        month={11},
        keywords={semantic concept detection, multi-model fusion, feature correlation},
        doi={10.4108/icst.collaboratecom.2013.254110}
    }
    
Hsin-Yu Ha1,*, Fausto Fleites1, Shu-Ching Chen1
  • 1: Florida International University
*Contact email: hha001@cs.fiu.edu

Abstract

The rapid growth of multimedia technology has led to a surge in multimedia data. As such data make up an increasingly large portion of the content processed by many applications, it is important to leverage data mining methods that associate the low-level features extracted from multimedia data with high-level semantic concepts. To bridge this semantic gap, researchers have investigated the correlations among the multiple modalities of multimedia data in order to detect semantic concepts effectively, and multimodal fusion has been shown to play an important role in improving both content-based multimedia retrieval and semantic concept detection. In this paper, we propose a novel cluster-based ARC fusion method that thoroughly explores the correlations among multiple modalities and classification models. After combining features from multiple modalities, one classification model is built on each feature cluster generated by our previous work, FCC-MMF. The correlation between the medoid of a feature cluster and a semantic concept is introduced to quantify the detection capability of the corresponding classification model, and this correlation is combined with logistic regression to refine the ARC fusion method proposed in our previous work for semantic concept detection. Experiments comparing the proposed method with related works show that it achieves higher Mean Average Precision (MAP).
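
The following is a minimal Python/scikit-learn sketch of the fusion pipeline the abstract describes, not the paper's actual ARC fusion implementation. It assumes the FCC-MMF feature clusters are given as per-cluster feature matrices, uses an SVM as a stand-in per-cluster classifier, and approximates the medoid-concept correlation with the Pearson correlation between the cluster's medoid feature and the binary concept labels; the function names (cluster_medoid, fuse_concept_scores) are hypothetical.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC

    def cluster_medoid(X):
        # Index of the medoid feature (column) of a cluster: the feature whose
        # summed Euclidean distance to the other features in the cluster is smallest.
        cols = X.T
        dists = np.linalg.norm(cols[:, None, :] - cols[None, :, :], axis=2)
        return int(np.argmin(dists.sum(axis=1)))

    def fuse_concept_scores(train_clusters, y_train, test_clusters):
        # train_clusters / test_clusters: lists of (n_samples, d_k) arrays,
        # one per feature cluster; y_train: binary labels for one concept.
        train_scores, test_scores, weights = [], [], []
        for X_tr, X_te in zip(train_clusters, test_clusters):
            clf = SVC(probability=True).fit(X_tr, y_train)    # one model per cluster
            train_scores.append(clf.predict_proba(X_tr)[:, 1])
            test_scores.append(clf.predict_proba(X_te)[:, 1])

            m = cluster_medoid(X_tr)                          # representative feature
            corr = np.corrcoef(X_tr[:, m], y_train)[0, 1]     # medoid-concept correlation
            weights.append(0.0 if np.isnan(corr) else abs(corr))

        w = np.asarray(weights)
        S_tr = np.column_stack(train_scores) * w              # weighted model scores
        S_te = np.column_stack(test_scores) * w
        fusion = LogisticRegression().fit(S_tr, y_train)      # learn fusion weights
        return fusion.predict_proba(S_te)[:, 1]               # final concept scores

In this sketch, each per-cluster model contributes a score weighted by how strongly its cluster's medoid correlates with the concept, and logistic regression then learns how to combine the weighted scores, mirroring the refinement step described in the abstract under the stated assumptions.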