A novel architecture to improve syntactic analysis
So-called self-attention models have been hugely successful in a wide range of natural language processing (NLP) tasks, such as automatic summarization, translation, named entity recognition, relation extraction, sentiment analysis, speech recognition, and topic segmentation. Using a refinement model, James Henderson and Alireza Mohammadshahi, members of the Natural Language Understanding group, demonstrate the power and effectiveness of the Recursive Non-autoregressive Graph-to-Graph Transformer (RNGTr) architecture on several dependency parsing corpora. Their aim is to improve the accuracy of syntactic analysis across corpora in several languages. To achieve this, they propose a novel architecture for the iterative refinement of arbitrary graphs that combines non-autoregressive edge prediction with conditioning on the complete graph.
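To illustrate the idea of recursive non-autoregressive refinement, the following is a minimal Python sketch, not the authors' implementation: at each iteration, an encoder conditions on the sentence together with the complete graph predicted in the previous step, and all edges of the new graph are predicted in parallel (non-autoregressively) until the graph stops changing. The `encoder` and `edge_scorer` components are hypothetical stand-ins.

```python
# Illustrative sketch of recursive non-autoregressive graph refinement.
# `encoder` and `edge_scorer` are hypothetical stand-ins, not part of the
# published RNGTr code.

def refine_graph(sentence, encoder, edge_scorer, max_iterations=3):
    """Iteratively refine a dependency graph for `sentence`.

    At every step the encoder conditions on the *complete* previous graph,
    and every edge of the new graph is predicted in parallel
    (non-autoregressively).
    """
    graph = {}  # start from an empty graph (or an initial parser's output)
    for _ in range(max_iterations):
        # Encode the tokens together with the current graph
        # (graph-to-graph conditioning).
        token_states = encoder(sentence, graph)
        # Predict the head and label of every token at once.
        new_graph = {
            dependent: edge_scorer(token_states, dependent)
            for dependent in range(len(sentence))
        }
        if new_graph == graph:  # stop when refinement reaches a fixed point
            break
        graph = new_graph
    return graph
```

In this sketch, conditioning on the full previous graph lets each refinement step correct earlier mistakes, while predicting all edges in parallel avoids the left-to-right error propagation of autoregressive decoding.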
More information