Computing semantic similarity between words and phrases has important applications in natural language processing, information retrieval, and artificial intelligence. There are two prevailing approaches to computing word similarity: one based on a thesaurus (e.g., WordNet) and one based on statistics from a large corpus. We provide a hybrid approach combining the two methods, demonstrated on a web site through two services: one that returns a similarity score for two words or phrases and another that takes a word and shows a ranked list of the most similar words.
Our statistical method is based on distributional similarity and Latent Semantic Analysis. We further complement it with semantic relations extracted from WordNet. The whole process is automatic and can be trained on different corpora. We assume that the semantics of a phrase is compositional over its component words and apply an algorithm that computes the similarity between two phrases from the similarities of their words.
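One common way to build phrase similarity compositionally from word similarity is bidirectional best-match alignment: align each word in one phrase with its most similar word in the other phrase, average those scores, and then average the two directions. The sketch below illustrates this scheme; it is an assumption for illustration, not the service's actual algorithm, and the `word_sim` lookup table is a toy stand-in for real LSA-based word similarity scores.

```python
# Toy stand-in for a corpus-trained word-similarity model (e.g., LSA cosines).
WORD_SIM = {
    ("car", "automobile"): 0.92,
    ("fast", "quick"): 0.85,
}

def word_sim(w1, w2):
    """Word similarity: 1.0 for identical words, symmetric table lookup otherwise."""
    if w1 == w2:
        return 1.0
    return WORD_SIM.get((w1, w2), WORD_SIM.get((w2, w1), 0.0))

def phrase_sim(p1, p2):
    """Compositional phrase similarity via bidirectional best-match alignment:
    each word is matched to its most similar word in the other phrase, and the
    per-direction averages are averaged to keep the measure symmetric."""
    ws1, ws2 = p1.split(), p2.split()

    def directed(a, b):
        # Average, over words in a, of each word's best match score in b.
        return sum(max(word_sim(w, v) for v in b) for w in a) / len(a)

    return (directed(ws1, ws2) + directed(ws2, ws1)) / 2
```

With the toy table above, `phrase_sim("fast car", "quick automobile")` averages the best matches fast/quick (0.85) and car/automobile (0.92) in both directions, giving 0.885.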
The algorithms, implementation and data for this work were developed by Lushan Han as part of his research on developing easier ways to query linked open data collections. It was supported by grants from AFOSR (FA9550-08-1-0265), NSF (IIS-1250627) and a gift from Microsoft. Contact umbcsim at cs.umbc.edu for more information.