Random subspace method for multivariate feature selection


Abstract

In a growing number of domains, the captured data encapsulate as many features as possible. This poses a challenge to classical pattern recognition techniques, since the number of samples often remains limited relative to the number of features. Classical pattern recognition methods suffer from this small sample size, so robust classification techniques are needed.

In order to reduce the dimensionality of the feature space, the selection of informative features becomes an essential step towards the classification task. The relevance of the features can be evaluated either individually (univariate approaches) or in a multivariate manner. Univariate approaches are simple and fast, and therefore appealing. However, they do not consider possible correlations and dependencies between the features, so multivariate search techniques may be helpful. Several limitations restrict the use of multivariate searches. First, they are prone to overtraining, especially in p ≫ n settings (many features, few samples). Second, they can be computationally too expensive when dealing with a large feature space.
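To make the limitation of univariate scoring concrete, here is a minimal sketch (our illustration, not taken from the paper) of an XOR problem in which each feature is individually uninformative while the pair is perfectly predictive:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two binary features whose XOR determines the class label:
# individually each feature carries no information about y,
# jointly they determine it exactly.
n = 1000
x1 = rng.integers(0, 2, n)
x2 = rng.integers(0, 2, n)
y = x1 ^ x2

# Univariate view: each feature's correlation with the label is ~0,
# so a univariate filter would discard both features.
print(np.corrcoef(x1, y)[0, 1])   # close to 0
print(np.corrcoef(x2, y)[0, 1])   # close to 0

# Multivariate view: the feature pair predicts the label perfectly.
print(np.mean((x1 ^ x2) == y))    # 1.0
```

A univariate filter ranks both features at the bottom, whereas any multivariate criterion that evaluates the pair jointly recovers them immediately.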

We introduce a new multivariate search technique that is less sensitive to noise in the data and is computationally feasible. We compare our approach with several multivariate and univariate feature selection techniques on an artificial dataset, which provides ground-truth information, and on a real dataset. The results show the importance of multivariate search techniques and demonstrate the robustness and reliability of our novel multivariate feature selection method.
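As a rough illustration of the random subspace idea named in the title, the sketch below repeatedly draws small random feature subsets, evaluates each subspace with a cross-validated classifier, and credits every feature with the accuracy of the subspaces it appears in. The function name, the choice of logistic regression, and the averaging scheme are assumptions made for illustration, not the paper's exact procedure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def random_subspace_selection(X, y, n_subspaces=200, subspace_size=10, seed=0):
    """Score features via random subspaces (illustrative sketch only).

    Each feature accumulates the cross-validated accuracy of every
    random subspace it appears in; features occurring in many
    well-performing subspaces receive high average scores.
    Requires X.shape[1] >= subspace_size.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    score_sum = np.zeros(n_features)
    counts = np.zeros(n_features)
    for _ in range(n_subspaces):
        # Draw a small random feature subset and evaluate it jointly.
        subset = rng.choice(n_features, size=subspace_size, replace=False)
        acc = cross_val_score(LogisticRegression(max_iter=1000),
                              X[:, subset], y, cv=3).mean()
        score_sum[subset] += acc
        counts[subset] += 1
    # Average accuracy of the subspaces each feature participated in.
    return score_sum / np.maximum(counts, 1)
```

Because every individual fit sees only a small subspace, the procedure keeps each search cheap and limits overtraining in p ≫ n settings, while the aggregation over many subspaces still captures multivariate interactions.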