Classification-oriented machine learning models have been well studied over the past decades, while the focus has shifted to deep learning (DL) in recent years. DL models handle feature learning and classification jointly. Although DL, trained with back propagation (BP), often achieves the best classification performance, DL models suffer from a lack of interpretability, high computational cost, and high model complexity. In contrast, classical machine learning treats feature extraction and classification as separate modules. We focus on the classical learning paradigm and propose a new high-performance classifier that takes features as its input. Examples of classical classifiers include the support vector machine (SVM), the decision tree (DT), the multilayer perceptron (MLP), the feedforward multilayer perceptron (FF-MLP), and the extreme learning machine (ELM). SVM, DT, and FF-MLP share one common idea, i.e., feature space partitioning. Inspired by the MLP, the DT, and the ELM, we propose a new classification model, called the subspace learning machine (SLM), for general classification tasks.

The SLM attempts to partition the input feature space efficiently into multiple discriminant subspaces in a hierarchical manner. It works as follows. First, SLM identifies a discriminant subspace by examining the discriminant power of the input features. Then, it applies p random projections to the features of the identified subspace to yield p one-dimensional (1D) subspaces and finds the optimal partition in each of them. This is equivalent to partitioning the input space with p hyper-planes, whose orientations and biases are determined by the random projections and the partitions, respectively. Among the p projections, we develop a criterion to choose the best q partitions, which yield 2q partitioned subspaces. The subspace partitioning process is repeated at each child node; when the samples at a child node are sufficiently pure, partitioning stops and SLM makes the final prediction. SLM offers a lightweight and mathematically transparent classifier.
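To make the node-splitting step concrete, the following is a minimal Python sketch, not the authors' implementation: it draws p random unit directions, projects the node's samples onto each direction to form 1D subspaces, searches candidate thresholds for the best binary partition of each projection, and keeps the q most discriminant hyper-planes. The Gini impurity criterion, the quantile-based threshold grid, and the names gini_impurity, best_split_1d, and slm_node_split are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def gini_impurity(labels):
    # Gini impurity of a label array; 0 means the node is pure.
    if len(labels) == 0:
        return 0.0
    _, counts = np.unique(labels, return_counts=True)
    probs = counts / counts.sum()
    return 1.0 - np.sum(probs ** 2)

def best_split_1d(z, y):
    # Search candidate thresholds on a 1D projection z and return the
    # (threshold, weighted impurity) of the best binary partition.
    best_t, best_cost = None, np.inf
    for t in np.quantile(z, np.linspace(0.1, 0.9, 9)):
        left, right = y[z <= t], y[z > t]
        if len(left) == 0 or len(right) == 0:
            continue
        cost = (len(left) * gini_impurity(left) +
                len(right) * gini_impurity(right)) / len(y)
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, best_cost

def slm_node_split(X, y, p=20, q=3, rng=None):
    # One SLM node: try p random 1D projections of the (sub)space and
    # keep the q hyper-planes whose optimal partitions are most
    # discriminant (lowest weighted impurity).
    rng = np.random.default_rng(rng)
    candidates = []
    for _ in range(p):
        a = rng.standard_normal(X.shape[1])
        a /= np.linalg.norm(a)           # hyper-plane orientation
        z = X @ a                         # projected 1D subspace
        t, cost = best_split_1d(z, y)     # hyper-plane bias
        if t is not None:
            candidates.append((cost, a, t))
    candidates.sort(key=lambda c: c[0])
    return [(a, t) for _, a, t in candidates[:q]]
```

In this sketch, each returned (direction, threshold) pair splits the node's samples into two children, so the q selected partitions produce 2q child subspaces, and the same routine would be applied recursively until a child node is sufficiently pure.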

— By Hongyu Fu