A machine learning framework has three fundamental components: feature extraction, feature selection, and the decision model. In Green Learning, the Saab transform and the Least-Squares Normal Transform handle feature extraction, while the Discriminant Feature Test (DFT) and the Relevant Feature Test (RFT) handle feature selection. However, we still lack a green and interpretable solution for the decision model. For a long time, we applied gradient-boosted trees such as XGBoost or LightGBM as the classifier. Yet these models sacrifice interpretability for performance, and their large model size has become a significant burden for Green Learning. We are therefore motivated to develop a green and interpretable classifier called SLMBoost, whose idea is to train a boosting model with the Subspace Learning Machine (SLM).

Let’s start by looking at a single SLM. In each SLM, we first identify a discriminant subspace through a series of feature selection techniques, including DFT, RFT, and the removal of correlated features. Each SLM then learns a linear regression model on the selected subspace, as illustrated in Figure 1 and sketched in the code below. In short, a single SLM is a linear least-squares model operating on a subspace.
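To make the structure concrete, here is a minimal sketch of a single SLM in Python/NumPy. This is an illustration, not the actual implementation: the feature score below is a simple correlation-based stand-in for DFT/RFT, the correlated-feature removal step is omitted, and the names (`SLM`, `top_k`, `_score_features`) are hypothetical.

```python
import numpy as np

class SLM:
    """A minimal single SLM: select a discriminant subspace, then fit a
    least-squares linear regression on it. Illustrative sketch only; the
    real DFT/RFT scores are partition-based and more elaborate."""

    def __init__(self, top_k=16):
        self.top_k = top_k
        self.idx = None   # indices of the selected subspace
        self.w = None     # least-squares weights (last entry is the bias)

    def _score_features(self, X, y):
        # Stand-in for DFT/RFT: absolute correlation of each feature with the target.
        Xc = X - X.mean(axis=0)
        yc = y - y.mean()
        denom = Xc.std(axis=0) * yc.std() + 1e-12
        return np.abs((Xc * yc[:, None]).mean(axis=0)) / denom

    def fit(self, X, y):
        scores = self._score_features(X, y)
        self.idx = np.argsort(scores)[::-1][:self.top_k]        # keep top-k features
        Z = np.hstack([X[:, self.idx], np.ones((len(X), 1))])   # append bias column
        self.w, *_ = np.linalg.lstsq(Z, y, rcond=None)          # linear least squares
        return self

    def predict(self, X):
        Z = np.hstack([X[:, self.idx], np.ones((len(X), 1))])
        return Z @ self.w
```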

Further, we ensemble SLMs in a boosting fashion by training a sequence of SLMs, where each SLM is trained to correct the errors made by its predecessors. To achieve this, the training target of an SLM is the residual of the accumulated prediction from all the SLMs before it. Two key factors make this boosting effective: varying the feature subspaces and varying the data subsets across rounds. With DFT/RFT, we can zoom in on different feature subspaces, and we also switch between different feature sets, such as the IPHop-II and IPHop-III features. To vary the data subsets, we apply filtering policies that remove confidently predicted samples during boosting, so the model gradually concentrates on the hard samples as the boosting sequence proceeds. A minimal sketch of this loop follows.
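The sketch below shows the residual-boosting loop with a confidence-based filter, reusing the hypothetical `SLM` class above. It assumes binary labels in {-1, +1}; the learning rate `lr`, the threshold `conf_thresh`, and the filtering rule itself are illustrative placeholders for the actual filtering policies, and the per-round switching between feature sets (e.g., IPHop-II vs. IPHop-III) is omitted for brevity.

```python
import numpy as np

def fit_slm_boost(X, y, n_rounds=10, lr=0.5, conf_thresh=0.9, top_k=16):
    """Boosted SLMs: each round fits an SLM to the residual of the
    accumulated prediction, then drops confidently predicted samples so
    that later rounds focus on the hard examples. Illustrative sketch."""
    slms = []
    F = np.zeros(len(y))            # accumulated prediction over all samples
    active = np.arange(len(y))      # indices of samples still used for training
    for _ in range(n_rounds):
        residual = y[active] - F[active]            # target = remaining error
        slm = SLM(top_k=top_k).fit(X[active], residual)
        slms.append(slm)
        F += lr * slm.predict(X)                    # update scores for all samples
        # Filtering policy (placeholder): drop samples whose score is already
        # confidently on the correct side of the +/-1 target.
        confident = (y[active] * F[active]) > conf_thresh
        active = active[~confident]
        if len(active) == 0:
            break
    return slms

def predict_slm_boost(slms, X, lr=0.5):
    # Inference accumulates the same weighted SLM outputs as training.
    return sum(lr * slm.predict(X) for slm in slms)
```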

After residual correction through a sequence of SLMs, the model becomes significantly more powerful than a single linear regression model. Despite this enhanced capability, each SLM remains essentially a linear regression model and is consequently easy to interpret. Moreover, the total number of parameters is roughly the subspace dimension (plus one bias term) multiplied by the number of SLMs in the boosting sequence, which is considerably smaller than that of XGBoost or LightGBM. In summary, SLMBoost is a promising direction for building a green and interpretable decision model.
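Under the hypothetical sketches above, both claims are direct to verify on toy data: each round is a plain linear model on a small, named feature subset, and the parameter count is just (subspace dimension + 1) per SLM.

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 64))                            # toy data for illustration
y_train = np.sign(X_train[:, 0] + 0.1 * rng.normal(size=1000))   # labels in {-1, +1}

slms = fit_slm_boost(X_train, y_train, n_rounds=5, top_k=16)
for t, slm in enumerate(slms):
    # Interpretation: read off which features each round uses and their weights.
    print(f"round {t}: features {slm.idx}, bias {slm.w[-1]:.3f}")

# Size: (subspace dimension + 1) weights per SLM in the sequence.
n_params = sum(len(slm.w) for slm in slms)
print(f"total parameters: {n_params}")   # e.g., 5 rounds x 17 = 85
```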