What is it about?

In this research article, we study the problem of using a neural machine translation model to translate Arabic dialects into Modern Standard Arabic. Our approach builds on the recently proposed recurrent neural network-based encoder-decoder architecture, which frames machine translation as a sequence-to-sequence learning problem. We propose a multitask learning (MTL) model that shares a single decoder among all language pairs, while each source language has its own encoder. The proposed model can be applied to both limited and extensive amounts of data. Our experiments show that the proposed MTL model achieves higher translation quality than models trained individually on each language pair.
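The parameter-sharing idea above can be illustrated with a minimal toy sketch: each dialect gets its own encoder parameters, while one decoder is shared across all dialect-to-MSA pairs. This is only an illustration of the sharing scheme, not the paper's actual RNN model; the dialect names, sizes, and the mean-pooling "encoder" are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, HID = 50, 16  # toy vocabulary and hidden sizes (assumptions)

# One set of encoder parameters per source dialect (separate per language)...
dialects = ["levantine", "maghrebi"]
encoders = {d: rng.normal(0, 0.1, (VOCAB, HID)) for d in dialects}

# ...but a single decoder parameter matrix shared by every language pair.
shared_decoder = rng.normal(0, 0.1, (HID, VOCAB))

def encode(dialect, token_ids):
    # Toy encoder: mean of that dialect's learned token embeddings.
    return encoders[dialect][token_ids].mean(axis=0)

def decode_step(state):
    # Shared decoder projects the encoded state to MSA vocabulary logits.
    logits = state @ shared_decoder
    return int(np.argmax(logits))

src = [3, 7, 19]  # a toy source sentence as token ids
for d in dialects:
    h = encode(d, src)          # dialect-specific encoding
    tok = decode_step(h)        # same shared decoder for every dialect
```

During training, gradients from every language pair update the shared decoder, which is what lets the low-resource dialects benefit from each other's data.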

Why is it important?

Experiments demonstrate that, even with small parallel training corpora, the multitask neural machine translation model generates correct output sequences, produces high-quality translations, and learns the predictive structure of multiple targets.


Writing this article was a great pleasure, as my co-authors are long-standing collaborators. To our knowledge, this is the first work to use deep learning models and multitask learning to translate Arabic dialects into Modern Standard Arabic.

Laith Baniata
Kyungpook National University

Read the Original

This page is a summary of: A Neural Machine Translation Model for Arabic Dialects That Utilizes Multitask Learning (MTL), Computational Intelligence and Neuroscience, December 2018, Hindawi Publishing Corporation, DOI: 10.1155/2018/7534712.
You can read the full text:



