Over the past few decades, many learning algorithms have been developed to extract knowledge from data. However, no single algorithm can learn every dataset with favorable results, because the underlying data patterns may be linear or nonlinear. Accordingly, the idea of aggregating the predictions of multiple learning models to improve on the forecasting accuracy of any single method was proposed. How to improve the accuracy of such aggregated predictions when learning from small datasets is the objective of this study. Based on the distributions of the predictive errors of the learning models, the proposed method learns a weight for each model and then aggregates the weighted predictions into a more precise forecast. The experimental results show that the forecasting errors of the predictions aggregated by the proposed method are significantly lower than those of the single models and of the simple averaged predictions.
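The abstract does not specify the exact weighting scheme, so the following is only a minimal sketch of the general idea, assuming a common choice: weighting each model by the inverse of its validation error before averaging. The function names, the inverse-MAE rule, and the toy numbers are all illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def inverse_error_weights(errors):
    # Assumed scheme: weight each model by the inverse of its
    # mean absolute validation error, normalized to sum to 1.
    inv = 1.0 / np.asarray(errors, dtype=float)
    return inv / inv.sum()

def aggregate(predictions, weights):
    # Weighted average across models (rows = models, cols = samples).
    return np.average(predictions, axis=0, weights=weights)

# Two hypothetical models: the first has a lower validation error,
# so it receives the larger weight.
val_errors = [0.5, 2.0]
w = inverse_error_weights(val_errors)   # -> [0.8, 0.2]
preds = np.array([[10.0, 12.0],
                  [14.0, 16.0]])
print(aggregate(preds, w))              # -> [10.8 12.8]
```

A simple average would weight both models equally; error-based weighting instead pulls the aggregate toward the historically more accurate model, which is the intuition behind the proposed method.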