What is it about?

Changing the word order of one language to make it more similar to another helps improve machine translation quality. Neural MT (NMT) is all the rage in MT at the moment, but no one had tried this for NMT before. The punchline? It doesn't help, at least not for Japanese<=>English or Chinese<=>English. But applying reordering insights from the previously dominant approach, statistical MT (SMT), does improve the performance of NMT. In this respect, what we have built is a hybrid MT system.
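To give a flavour of what pre-reordering means: the idea is to rewrite the source sentence into the word order of the target language before translating. The paper's actual systems use syntactic rules over parse trees; the toy Python sketch below (not the paper's method, and with a made-up one-verb rule purely for illustration) reorders an English subject-verb-object sentence into Japanese-like subject-object-verb order.

```python
# Toy illustration of pre-reordering (not the paper's actual system):
# rewrite an English SVO sentence into Japanese-like SOV order before
# handing it to the MT system.

def preorder_svo_to_sov(tagged_tokens):
    """tagged_tokens: list of (word, tag) pairs, assumed to contain
    verbs tagged "V". Moves the verbs to the end of the sentence,
    mimicking Japanese head-final word order."""
    verbs = [(w, t) for (w, t) in tagged_tokens if t == "V"]
    rest = [(w, t) for (w, t) in tagged_tokens if t != "V"]
    return [w for (w, _) in rest + verbs]

# "the cat chased the mouse" -> "the cat the mouse chased"
sentence = [("the", "DET"), ("cat", "N"), ("chased", "V"),
            ("the", "DET"), ("mouse", "N")]
print(" ".join(preorder_svo_to_sov(sentence)))
```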

Why is it important?

MT quality is improving all the time, but it still falls below the quality required for many applications. If we can make the output better, more people are likely to use MT for a range of use cases. In this paper, we do improve translation quality for Japanese-to-English and Chinese-to-English!

Perspectives

People are often keen to adopt a new approach to MT and to forget everything learned from older approaches. This paper shows that insights from SMT can help NMT, i.e. that hybrid models can work well. Don't throw the baby out with the bathwater!

Andy Way
Dublin City University

Read the Original

This page is a summary of: Pre-Reordering for Neural Machine Translation: Helpful or Harmful? Prague Bulletin of Mathematical Linguistics, January 2017, De Gruyter.
DOI: 10.1515/pralin-2017-0018.