Charades is a guessing game in which one player acts out a semantic concept (i.e., a word or phrase) for the other players to guess. An observation from playing charades is that people's notions of the iconic movements associated with a semantic concept are often inconsistent, a fact that has long been ignored in multimedia research. The novelty of this work is therefore an automated method for mining, from a large set of related videos containing various human actions, the most representative videos for each semantic concept as its iconic movements. The discovered iconic movements can further benefit a broad range of tasks, such as human action recognition and retrieval. A new video benchmark is also presented, and our experiments demonstrate the potential of our approach for human-action-based applications.
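The abstract does not specify how representativeness is computed; as a minimal sketch only, one generic way to pick the most representative video for a concept is medoid selection over video feature vectors (the feature vectors and the function below are hypothetical illustrations, not the authors' actual method):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def most_representative(features):
    """Return the index of the medoid: the video whose feature vector
    has the highest total cosine similarity to all other videos of the
    same semantic concept."""
    best_idx, best_score = 0, float("-inf")
    for i, fi in enumerate(features):
        score = sum(cosine(fi, fj) for j, fj in enumerate(features) if j != i)
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx

# Toy example: the second video is closest to both others on average.
videos = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
print(most_representative(videos))
```

In this sketch, the medoid video serves as the concept's "iconic movement"; a real system would extract motion features from the videos and could use any pairwise similarity in place of cosine.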