The vast amount of text on the Internet makes it hard for people to grasp its meaning in a limited time. Topic models (e.g., LDA and PLSA) have been proposed to summarize long texts into a set of topic terms. In recent years, short-text media such as tweets have become very popular. However, directly applying traditional topic models to a short-text corpus usually yields non-coherent topics, because a short document does not contain enough words to reveal word co-occurrence patterns. In this paper, we address the lack of local word co-occurrence in LDA by proposing a word co-occurrence enhancement method for topic models. We generate new virtual documents by re-organizing the words across documents and then simply apply traditional LDA. Experimental results show that our RO-LDA method performs well on both a noisy tweet dataset and a regular news-title dataset. Moreover, our method has two advantages: it requires no external data, and it leaves the underlying topic model unmodified, so it can be easily applied to other existing LDA-based models.
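To make the virtual-document idea concrete, the sketch below illustrates one simple way short documents could be pooled into longer pseudo-documents before running LDA. Note that this is a hypothetical illustration of the general re-organization strategy, not the paper's actual RO-LDA algorithm; the function `build_virtual_documents` and its pivot-word heuristic (pooling all short documents that share a sufficiently frequent word) are assumptions made for exposition only.

```python
from collections import defaultdict, Counter

def build_virtual_documents(short_docs, min_df=2):
    """Aggregate short documents into longer virtual documents.

    Hypothetical sketch: each word whose document frequency reaches
    min_df acts as a pivot, and every short document containing that
    pivot is pooled into one virtual document.  The pooled documents
    have denser word co-occurrence statistics, which is what a standard
    LDA implementation would then be run on, unmodified.
    """
    # Document frequency of each word (count each word once per doc).
    df = Counter(w for doc in short_docs for w in set(doc))
    pivots = {w for w, c in df.items() if c >= min_df}

    virtual = defaultdict(list)
    for doc in short_docs:
        for w in doc:
            if w in pivots:
                # Pool the whole short document under this pivot word.
                virtual[w].extend(doc)
    return dict(virtual)

# Toy tweet-like corpus: two sports snippets, two politics snippets.
tweets = [
    ["game", "score", "team"],
    ["team", "coach", "win"],
    ["election", "vote", "poll"],
    ["vote", "campaign", "debate"],
]
vdocs = build_virtual_documents(tweets)
# "team" pools the two sports tweets; "vote" pools the two politics tweets.
```

The resulting virtual documents (here keyed by "team" and "vote") would be fed to an off-the-shelf LDA implementation in place of the original tweets, which is consistent with the abstract's claim that the topic model itself is never modified.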