Text similarity modeling plays an important role in a variety of Natural Language Processing (NLP) applications, such as information retrieval, text clustering, and plagiarism detection. It can also serve as an automatic evaluation metric for natural language generation tasks such as machine translation and image captioning, saving expensive and time-consuming human labeling.

Word Mover’s Distance (WMD) [1] is an efficient model for measuring the semantic distance between two texts. WMD incorporates word embeddings, which learn semantically meaningful representations for words, into the earth mover’s distance: the distance between two texts A and B is the minimum cumulative distance that all words from text A need to travel to exactly match text B.
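
For concreteness, here is a minimal sketch of WMD on top of the POT (Python Optimal Transport) library. The `embed` lookup is a stand-in for any pretrained word-embedding table (e.g. word2vec), and the normalized word frequencies used as flow follow the nBOW weighting of [1]; the helper name and lookup are illustrative, not a fixed API.

```python
import numpy as np
import ot  # POT: Python Optimal Transport library


def wmd(words_a, words_b, embed):
    """Word Mover's Distance between two tokenized texts.

    words_a, words_b: lists of tokens; embed: a dict-like lookup
    from a word to its embedding vector (an assumed stand-in for
    any pretrained embedding table).
    """
    vocab_a, counts_a = np.unique(words_a, return_counts=True)
    vocab_b, counts_b = np.unique(words_b, return_counts=True)
    # nBOW flow: each word carries its normalized frequency.
    d_a = counts_a / counts_a.sum()
    d_b = counts_b / counts_b.sum()
    # Cost matrix: Euclidean distance between word embeddings.
    E_a = np.stack([embed[w] for w in vocab_a])
    E_b = np.stack([embed[w] for w in vocab_b])
    M = np.linalg.norm(E_a[:, None, :] - E_b[None, :, :], axis=-1)
    # Earth mover's distance: minimum cumulative travel cost.
    return ot.emd2(d_a, d_b, M)
```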

In our work, we incorporate syntactic parsing, which provides meaningful structural information, into WMD. Two components control the flow in WMD: the distance matrix and the flow assigned to each word.

First, when computing the distance matrix, the original WMD compares only an individual pair of word embeddings and ignores the rest of the sentence. To measure the distance between words better, we form sub-tree structures from the dependency parse tree: instead of comparing only the word embeddings, we also compare the similarity of the sub-trees that contain the words.

Second, a word’s flow can be regarded as the word’s importance. If more flow is assigned to important words, most of the flow will be transported between important words, so the total transportation cost is mainly determined by the similarity of the important words. We currently use a word’s dependency relation in the parse tree to assign its importance weight. In the future, we aim to combine these two parts for further improvement; a sketch of how they could fit together is given below.
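
The following is a hypothetical prototype of the two ideas combined, using spaCy’s dependency parser and the same POT solver as above. The `DEP_WEIGHT` table, the `subtree_vector` helper (which mixes a word’s embedding with the mean of its head-plus-children sub-tree), and the mixing parameter `alpha` are all illustrative assumptions, not the exact formulation described above.

```python
import numpy as np
import ot
import spacy

nlp = spacy.load("en_core_web_md")  # pipeline with word vectors

# Illustrative importance per dependency relation; the actual
# weighting scheme derived from the parse is an assumption here.
DEP_WEIGHT = {"ROOT": 3.0, "nsubj": 2.0, "dobj": 2.0}


def subtree_vector(token, alpha=0.5):
    """Mix a word's own embedding with the mean embedding of its
    dependency sub-tree (the word plus its direct children).
    alpha is a hypothetical mixing parameter."""
    kids = [child.vector for child in token.children]
    tree = np.mean([token.vector] + kids, axis=0)
    return alpha * token.vector + (1 - alpha) * tree


def syntax_wmd(text_a, text_b):
    doc_a = [t for t in nlp(text_a) if not t.is_punct]
    doc_b = [t for t in nlp(text_b) if not t.is_punct]
    # Flow: dependency-relation importance, normalized to sum to 1.
    w_a = np.array([DEP_WEIGHT.get(t.dep_, 1.0) for t in doc_a])
    w_b = np.array([DEP_WEIGHT.get(t.dep_, 1.0) for t in doc_b])
    w_a, w_b = w_a / w_a.sum(), w_b / w_b.sum()
    # Cost: Euclidean distance between sub-tree-aware vectors.
    V_a = np.stack([subtree_vector(t) for t in doc_a])
    V_b = np.stack([subtree_vector(t) for t in doc_b])
    M = np.linalg.norm(V_a[:, None, :] - V_b[None, :, :], axis=-1)
    return ot.emd2(w_a, w_b, M)
```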

(Image 1 is an illustration of the word mover’s distance, and Image 2 is an example of a dependency parse tree [2].)


References:

[1] Kusner, Matt, et al. “From word embeddings to document distances.” International Conference on Machine Learning. PMLR, 2015.

[2] Jurafsky, Daniel, and James H. Martin. “Speech and Language Processing.” (2000).


— By Chengwei Wei