This paper proposes a fusion model that merges context-aware multimodal information using JADE (Java Agent DEvelopment Framework). The context-aware multimodal information system is built on heterogeneous context-sensing devices: it not only gathers multidimensional data to recognize and analyze emotion information, but also manages that context-aware emotion information. Based on users' remote-control usage while watching TV together with face recognition technology, we developed a context-aware multimodal information system for emotion recognition. Emotion information is inferred from the action data of remote-control usage and combined with the emotion information obtained from face recognition; the two sources are fused through a feedback mechanism based on the user's real emotion to derive a personal emotion representation. This fusion model of context-aware multimodal information provides personal emotion information and a learning mechanism that reasons over the context-aware ubiquitous environment for personal emotion prediction.
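To make the agent-based fusion concrete, the following is a minimal sketch of how such a fusion agent could be written on JADE. It is an illustration under assumptions, not the paper's implementation: the agent names (`face-recognizer`), the message format (a numeric emotion score as message content), the linear weighted fusion, and the `feedback` update rule are all hypothetical stand-ins for the paper's fusion and feedback mechanisms.

```java
import jade.core.Agent;
import jade.core.behaviours.CyclicBehaviour;
import jade.lang.acl.ACLMessage;

// Hypothetical JADE agent that fuses emotion estimates from two
// modality agents: remote-control usage and face recognition.
public class EmotionFusionAgent extends Agent {

    // Illustrative fusion weight; in the paper this role is played
    // by the feedback mechanism driven by the user's real emotion.
    private double faceWeight = 0.6;

    private Double remoteScore = null; // latest remote-control-based estimate
    private Double faceScore = null;   // latest face-recognition estimate

    @Override
    protected void setup() {
        addBehaviour(new CyclicBehaviour(this) {
            @Override
            public void action() {
                ACLMessage msg = myAgent.receive();
                if (msg == null) { block(); return; }
                // Assumed convention: each modality agent sends its
                // emotion score as the plain-text message content.
                double score = Double.parseDouble(msg.getContent());
                if ("face-recognizer".equals(msg.getSender().getLocalName())) {
                    faceScore = score;
                } else {
                    remoteScore = score;
                }
                if (faceScore != null && remoteScore != null) {
                    double fused = faceWeight * faceScore
                                 + (1 - faceWeight) * remoteScore;
                    System.out.println("Fused emotion score: " + fused);
                }
            }
        });
    }

    // Assumed form of the real-emotion feedback: shift weight toward
    // the modality whose estimate was closer to the reported emotion.
    public void feedback(double realScore) {
        if (faceScore == null || remoteScore == null) return;
        double faceErr = Math.abs(faceScore - realScore);
        double remoteErr = Math.abs(remoteScore - realScore);
        faceWeight += 0.05 * Math.signum(remoteErr - faceErr);
        faceWeight = Math.max(0.0, Math.min(1.0, faceWeight));
    }
}
```

In this sketch the per-user adaptation lives entirely in `faceWeight`, updated by `feedback`; a fuller realization would learn a personal emotion representation rather than a single scalar weight.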