What is it about?
In this research article, we study the problem of using a neural machine translation model to translate Arabic dialects into Modern Standard Arabic. The proposed solution builds on the recently introduced recurrent neural network-based encoder-decoder architecture, which frames machine translation as a sequence-to-sequence learning problem. We propose a multitask learning (MTL) model that shares a single decoder among all language pairs, while each source language has its own encoder. The proposed model can be applied to limited as well as extensive amounts of data. Our experiments show that the proposed MTL model achieves higher translation quality than individually trained models.
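The key structural idea, one shared decoder with a separate encoder per source dialect, can be sketched as follows. This is a minimal illustrative outline, not the paper's actual implementation; the class and dialect names are assumptions, and the encode/decode bodies stand in for real recurrent networks.

```python
# Illustrative sketch of the multitask setup: one decoder shared across all
# language pairs, a separate encoder per source dialect. Names are hypothetical.

class Encoder:
    """Per-dialect encoder (a real model would run an RNN over the tokens)."""
    def __init__(self, dialect):
        self.dialect = dialect

    def encode(self, tokens):
        # Placeholder for producing a context representation of the input.
        return (self.dialect, tuple(tokens))


class SharedDecoder:
    """Single decoder reused for every source dialect."""
    def decode(self, context):
        _dialect, tokens = context
        # Placeholder for generating Modern Standard Arabic output tokens.
        return list(tokens)


class MultitaskNMT:
    def __init__(self, dialects):
        self.encoders = {d: Encoder(d) for d in dialects}  # one encoder each
        self.decoder = SharedDecoder()                     # shared by all pairs

    def translate(self, dialect, tokens):
        return self.decoder.decode(self.encoders[dialect].encode(tokens))


model = MultitaskNMT(["Levantine", "Maghrebi", "Gulf"])
output = model.translate("Levantine", ["token_a", "token_b"])
```

Because the decoder parameters are shared, training on any one dialect-to-MSA pair updates the decoder used by all the others, which is what lets the model benefit from multiple tasks even when each dialect's parallel data is small.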
Why is it important?
Experiments demonstrate that, even with small parallel training data, the multitask neural machine translation model is effective at generating correct sequences, produces high-quality translations, and learns the predictive structure of multiple targets.
Read the Original
This page is a summary of: A Neural Machine Translation Model for Arabic Dialects That Utilizes Multitask Learning (MTL), Computational Intelligence and Neuroscience, December 2018, Hindawi Publishing Corporation,
DOI: 10.1155/2018/7534712.