Welocalize Update on Neural Machine Translation

Neural Machine Translation (NMT) is currently one of the most discussed topics in the globalization and localization industry. Born out of a broader shift towards artificial intelligence and deep learning, NMT is widely cited as a future technology that will be able to translate high volumes of content at good quality. Over the past few years, researchers and academic institutions have been shifting focus from statistical machine translation (SMT) towards developing neural networks that improve the speed and quality of translation output.

Dave Landan, Senior Computational Linguist, works on the development of NMT solutions at Welocalize. His blog, Neural MT is the Next Big Thing, published in May 2016, gives a comprehensive account of the history of MT and the emergence of SMT and NMT. In this latest blog, Dave shares insights into industry developments in NMT and how Welocalize continues to invest in the technology to bring it further into commercial use.

NMT is an emerging technology, and both academic institutions and MT organizations are still in the early stages of developing NMT offerings for commercial use. Investment and development of NMT by the large technology firms continues, with both Microsoft and Google now offering generic NMT systems that translate between English and a limited number of other languages. While most solutions continue to use Recurrent Neural Networks (RNNs), Facebook AI Research has released open-source code using Convolutional Neural Networks (CNNs), which offer the potential of faster training. In production, however, the translation industry is still dominated by SMT, with NMT only recently emerging from the lab.
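To make the RNN-versus-CNN distinction concrete, here is a minimal PyTorch sketch (our own illustration, not code from Facebook's fairseq or any of the systems mentioned above). An RNN encoder must walk the sequence one step at a time because each hidden state depends on the previous one; a convolutional encoder computes every output position from a fixed window of inputs, so all positions can be processed in parallel on a GPU, which is where the potential training-speed advantage comes from.

```python
import torch
import torch.nn as nn

batch, seq_len, dim = 32, 50, 256     # illustrative sizes
x = torch.randn(batch, seq_len, dim)  # a batch of embedded source sentences

# RNN encoder: hidden state t depends on hidden state t-1,
# so the 50 time steps must be computed one after another.
rnn = nn.LSTM(input_size=dim, hidden_size=dim, batch_first=True)
rnn_out, _ = rnn(x)  # (batch, seq_len, dim)

# CNN encoder: each output position depends only on a fixed local
# window of inputs, so all 50 positions can be computed in parallel.
cnn = nn.Conv1d(in_channels=dim, out_channels=dim, kernel_size=3, padding=1)
cnn_out = cnn(x.transpose(1, 2)).transpose(1, 2)  # (batch, seq_len, dim)
```

A single convolution only sees a three-word window; convolutional translation models stack many such layers so that the receptive field grows to cover long-range context.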

At Welocalize, our goal is to provide the best quality-to-cost ratio for our clients’ requirements. We deliver that via machine translation or post-editing, whether through NMT, SMT, or a hybrid program, by continuing both our partner engagements and our investment in research and development. We’ve expanded our own NMT research to three separate code bases, and we have contributed code to the OpenNMT project. We’re also using GPU compute clusters in the cloud and investing in more in-house hardware to expand our NMT training capabilities.

You may have read or heard of the “rare word problem” in NMT: because the vocabulary is fixed at training time, NMT systems aren’t as well suited as SMT systems to handling rare or unseen words in production. We’re making good progress on limiting the effects of the rare word problem using a variety of techniques, and we’ve carried out some very promising experiments with adapting generic models to client- and topic-specific data.
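The techniques themselves aren’t named here, but one widely used remedy in the field is to split rare words into subword units, for example with byte-pair encoding (Sennrich et al., 2016). The following is a toy, pure-Python sketch of that idea, for illustration only: it learns merge operations from word frequencies, so that rare or unseen words can later be composed from known subword pieces rather than falling outside the fixed vocabulary.

```python
from collections import Counter

def learn_bpe(word_counts, num_merges):
    """Learn byte-pair-encoding merges from a word-frequency table."""
    # Represent each word as a tuple of symbols plus an end-of-word marker.
    vocab = {tuple(word) + ("</w>",): n for word, n in word_counts.items()}
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for symbols, n in vocab.items():
            for pair in zip(symbols, symbols[1:]):
                pairs[pair] += n
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Replace every occurrence of the best pair with a single symbol.
        new_vocab = {}
        for symbols, n in vocab.items():
            merged, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    merged.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    merged.append(symbols[i])
                    i += 1
            new_vocab[tuple(merged)] = n
        vocab = new_vocab
    return merges

# Toy corpus from the BPE literature: frequent endings such as "est" become
# single units, so unseen words can still be segmented into known pieces.
print(learn_bpe({"low": 5, "lower": 2, "newest": 6, "widest": 3}, 10))
```

On this toy corpus the first merges learned are ('e', 's'), ('es', 't') and ('est', '</w>'), so a word the system never saw in training, such as “lowest”, can still be segmented into the known pieces “low” and “est” instead of becoming an out-of-vocabulary token.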

If you want to get started with NMT, we recommend you do so with one or two language pairs that are traditionally difficult for SMT systems, like English – Chinese, English – Japanese, or even English – German.  The truth is that in many cases, for well-established language pairs like English – Spanish or English – Portuguese, customized SMT systems do as well as (and often better than) the nascent NMT systems.

Developing customized MT engines, whether neural or statistical, will continue to be the optimal approach to meeting clients’ MT needs. There is room and demand for both methods. Every client has its own terminology, style, tone and voice, and we take these factors into consideration when developing new MT programs, just as we have done with the MT-driven solutions that many of our Fortune 500 clients already enjoy.

Dave

Based in Dublin, Ireland, Dave Landan is Senior Computational Linguist at Welocalize.