A graph is a data representation model in which each data point is a node and an edge (connection) exists between two nodes that share a common characteristic. The relationships among nodes can be complex, which makes this an active research area. Several techniques have been developed, such as DeepWalk, Planetoid, Chebyshev networks, Graph Convolution Networks, Graph Attention Networks, and Large-Scale Graph Convolution Networks, which focus on modeling the behavior of nodes based on their connectivity to other nodes. Graph models are often designed for tasks like node classification and edge/link prediction, and they have varied applications in social networks and citation networks.
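To make the setup concrete, a minimal sketch of how a graph is typically represented for node classification: an adjacency matrix for connectivity, a feature matrix with one row per node, and a label vector. The toy graph and values below are purely illustrative, not from any particular dataset or from our implementation.

```python
import numpy as np

# 4 nodes, undirected edges (0,1), (1,2), (2,3): symmetric adjacency, no self-loops
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# One feature vector per node (rows) with 2 attributes (columns)
X = np.array([[0.2, 1.0],
              [0.5, 0.1],
              [0.9, 0.3],
              [0.4, 0.7]])

# Node labels for the classification task
y = np.array([0, 0, 1, 1])
```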
We are currently developing a Graph Neural Network model for the node classification task on graphs. A feedforward approach is adopted to learn the network parameters in a single forward pass using the Graph Hop method. The main idea is to learn each node's representation from the representations of its hops (neighboring nodes), so that the model captures attributes from a local to a global perspective through information exchange between the hops. The feature vector dimension grows as hop information is accumulated and is subsequently reduced using the Saab transform.
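A minimal sketch of this hop-aggregation idea, under our assumptions rather than the actual implementation: each node's features are concatenated with mean-aggregated features from its 1-hop and 2-hop neighborhoods (growing the dimension), and a plain PCA step stands in for the Saab transform to reduce the dimension. The helper names `hop_aggregate` and `pca_reduce` are hypothetical.

```python
import numpy as np

def hop_aggregate(A, X, num_hops=2):
    """Concatenate mean-aggregated k-hop neighborhood features onto X (dimension grows per hop)."""
    deg = A.sum(axis=1, keepdims=True)
    A_norm = A / np.maximum(deg, 1.0)      # row-normalized adjacency = mean aggregation
    feats = [X]
    H = X
    for _ in range(num_hops):
        H = A_norm @ H                     # propagate features one more hop outward
        feats.append(H)
    return np.concatenate(feats, axis=1)

def pca_reduce(F, out_dim):
    """PCA stand-in for the Saab transform: project onto the top principal components."""
    F_centered = F - F.mean(axis=0, keepdims=True)
    _, _, Vt = np.linalg.svd(F_centered, full_matrices=False)
    return F_centered @ Vt[:out_dim].T
```

Because both steps are linear-algebra operations computed once, no gradients or back propagation are needed to produce the node representations.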
Unlike existing methods, our model's computational complexity is very low because no back propagation is used to learn the network parameters; instead, the feedforward design lets the model learn in a single forward pass. The Graph Hop method also enables the model to train on very few labeled samples while still delivering good accuracy on test samples, so the model can be trained with very limited labeled data. Using only 5% of the training samples, we achieve performance comparable to the state of the art. We are also investigating different pre-processing techniques and their impact on our model.
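A hypothetical end-to-end usage sketch of this low-label setting, reusing the illustrative `hop_aggregate` and `pca_reduce` helpers from the sketch above: features are extracted once without back propagation, only about 5% of the node labels are kept (mirroring the label budget described here), and a lightweight classifier is fit on them. The random graph and labels are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
A_big = (rng.random((n, n)) < 0.02).astype(float)
A_big = np.triu(A_big, 1)
A_big = A_big + A_big.T                                   # symmetric adjacency, no self-loops
X_big = rng.standard_normal((n, 16))
y_big = (X_big[:, 0] + 0.1 * rng.standard_normal(n) > 0).astype(int)

# Single forward pass: hop aggregation followed by dimensionality reduction
F = pca_reduce(hop_aggregate(A_big, X_big, num_hops=2), out_dim=8)

# Keep only 5% of the nodes as labeled training data
train_idx, test_idx = train_test_split(np.arange(n), train_size=0.05,
                                       stratify=y_big, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(F[train_idx], y_big[train_idx])
print("test accuracy:", clf.score(F[test_idx], y_big[test_idx]))
```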
by Jessica Chen, Bin Wang, Joe Wang, Chaitra Suresh