Support Vector Machines (SVMs) are a state-of-the-art classification method, but after a suitable reformulation they can also perform regression. As in classification, for a nonlinear regression problem an SVM uses the kernel trick: it first maps the input space into a high-dimensional feature space and then performs linear regression in that feature space. Model ensembles can be applied in an attempt to improve prediction accuracy. The paper compares a single SVM, an aggregated SVM, and other regression models (linear regression, Projection Pursuit Regression, Neural Networks, Regression Trees, Random Forest, Bagging) in terms of mean squared error on a test set.