Posts

Welocalize Update on Neural Machine Translation

Neural Machine Translation (NMT) is currently one of the most discussed topics in the globalization and localization industry. Born out of a shift towards artificial intelligence and deep learning, NMT is widely cited as a future technology that will be able to translate high volumes of content at good quality. Over the past few years, researchers and academic institutions have been shifting focus from statistical machine translation (SMT) towards developing neural networks to improve the speed and quality of translation output.

Dave Landan, Senior Computational Linguist at Welocalize, works on the development of NMT solutions at Welocalize. His blog, Neural MT is the Next Big Thing, published in May 2016, gives an expert and comprehensive account of the history of MT and the emergence of SMT and NMT. In this latest blog, Dave provides expert insights into industry developments in NMT and how Welocalize continues to invest in NMT to bring it further into commercial use.

NMT is an emerging technology, and both academic institutions and MT organizations are still in the early stages of developing NMT offerings for commercial use. Investment in and development of NMT by the large technology firms continue, with both Microsoft and Google now offering generic NMT systems that translate between English and a limited number of locales. While most solutions continue to use Recurrent Neural Networks (RNNs), Facebook AI Research has released open-source code using Convolutional Neural Networks (CNNs), which offer the potential of faster training. In production, the translation industry continues to be dominated by statistical machine translation (SMT), with NMT only recently emerging from the lab.

At Welocalize, our goal is to provide the best value in quality-to-cost ratio for our clients’ requirements. We deliver that via translation or post-editing, whether the underlying engine is NMT, SMT or a hybrid, by continuing both partner engagements and investment in our own research and development. We’ve expanded our own NMT research to three separate code bases, and we have contributed code to the OpenNMT project. We’re also using GPU compute clusters in the cloud and investing in more in-house hardware to further expand our NMT training capabilities.

You may have read or heard of the “rare word problem” for NMT – because vocabulary size is fixed at training time, NMT systems aren’t as well-suited as SMT systems to handling rare or unseen words in production.  We’re making good progress on limiting the effects of the rare word problem using a variety of techniques, and we’ve carried out some very promising experiments with adapting generic models to client- and topic-specific data.
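To give a flavor of one such technique (purely as an illustration, not a description of our production systems): subword segmentation, popularized in the recent NMT literature, splits rare words into smaller units the model has already seen, so fewer inputs fall outside the fixed vocabulary. The toy vocabulary and segmenter below are invented for the example.

```python
# Illustrative only: greedy longest-match subword segmentation (in the spirit
# of byte-pair encoding / wordpieces). The subword vocabulary is invented;
# a real system learns it from the training corpus.

SUBWORD_VOCAB = {
    "trans", "lat", "ion", "local", "ize", "neural", "net", "work",
    "re", "er", "ing", "s",
    *"abcdefghijklmnopqrstuvwxyz",   # single letters as a last-resort fallback
}

def segment(word: str) -> list[str]:
    """Split a word into the longest known subword units, left to right."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):          # try the longest prefix first
            if word[i:j] in SUBWORD_VOCAB:
                pieces.append(word[i:j])
                i = j
                break
        else:
            pieces.append(word[i])                 # unknown character: keep it raw
            i += 1
    return pieces

# A word the system never saw in training still decomposes into known pieces
# instead of collapsing to a single unknown-word token.
print(segment("relocalizers"))   # ['re', 'local', 'ize', 'r', 's']
```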

If you want to get started with NMT, we recommend you do so with one or two language pairs that are traditionally difficult for SMT systems, like English – Chinese, English – Japanese, or even English – German.  The truth is that in many cases, for well-established language pairs like English – Spanish or English – Portuguese, customized SMT systems do as well as (and often better than) the nascent NMT systems.

Developing customized MT engines, whether neural or statistical, will continue to be the optimal approach to clients’ MT needs. There is room and demand for both methods. Every client has its own terminology, style, tone and voice, and we take these factors into consideration when developing new MT programs, just as we have done with the MT-driven solutions that many of our Fortune 500 clients enjoy.

Dave

Based in Dublin, Ireland, Dave Landan is Senior Computational Linguist at Welocalize.

Neural Machine Translation is the Next Big Thing

Welocalize Senior Computational Linguist Dave Landan writes about trends in machine translation (MT) and neural machine translation (NMT), and takes us through the evolution of MT. He shares insights on how Welocalize is using cutting-edge innovation and technologies in its language tools solutions and MT programs.

It’s been almost nine years since Koehn et al. published Moses: Open Source Toolkit for Statistical Machine Translation [1] in 2007, which fundamentally changed the way machine translation (MT) was done. But this was not the first fundamental shift in MT, and it looks like it won’t be the last. To ensure our clients receive world-class levels of innovation in the area of language technology, we are working with what we are pretty sure will be the next big thing in MT. More about that to follow, but first a little context about how MT has evolved.

Brief History of MT

The field of MT began in earnest in the 1950s, first with bilingual dictionaries that permitted only word-by-word translation. Translations produced by this method are seldom fluent: they are easily tripped up by polysemous words (words with more than one meaning, like “bank” or “Java”) and are often very difficult to understand for someone who doesn’t already know the intended meaning.

From this beginning, the Next Big Thing was the introduction of rule-based machine translation (RBMT).  First there was direct RBMT, which used basic rules on top of the bilingual dictionaries.  Those helped with word order problems, but still didn’t address the other problems.  Next, we saw the introduction of transfer RBMT, which added more rules to deal with morphology and syntax to address those problems.  These systems can give performance that is quite good, but because of the richness of language, the systems are often incomplete in vocabulary coverage, syntactic coverage, or both.  RBMT is also expensive because it requires humans (linguists) to write all the rules and maintain the dictionaries that the systems use.  Still, due in part to the high cost of computing resources, RBMT dominated the field between 2000 and 2010.  There are still companies that offer good RBMT solutions today, often hybrid solutions combining RBMT with SMT.

Statistical Machine Translation (SMT)

Thanks to increased computing power at a lower cost and some pioneering research from IBM around 1990, work on statistical machine translation (SMT) began to take off in the late 1990s and early 2000s. In 2007, Moses was earmarked as the next big thing in MT; however, it wasn’t until 2010-2012 that it became the foundation upon which nearly every commercial SMT system was based. SMT shifted the focus from linguists writing rules to acquiring the aligned corpora required to train SMT systems. SMT has limitations as well: language pairs with different word order are particularly tricky, and unless you have vast amounts of computing resources, modeling long-term dependencies between words or phrases is nearly impossible.

There have been incremental improvements to SMT over the past several years, including SMT using hierarchical models and the introduction of linguistic meta-data for grammar-informed models. But nothing has come along with as big an impact as the jump from word-by-word translation to RBMT, or from RBMT to SMT, until now.

Neural Machine Translation (NMT)

Over the past two years, researchers have been working on using sequence-to-sequence mapping with artificial neural networks to develop what’s being called neural machine translation (NMT).  Essentially, they use recurrent neural networks to build a system that learns to map a whole sentence from source to target all at once, instead of word-by-word, phrase-by-phrase, or n-gram-by-n-gram.  This eliminates the problems of long-term dependencies and word-ordering, because the system learns whole sentences at once.  Indeed, some researchers are looking at extending beyond the limitations of the sentence to whole paragraphs or even documents. Document-level translation would theoretically eliminate our need for aligned files and allow us to train on transcreated material, which is unthinkable in any system available today.
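To make the encoder-decoder idea concrete, here is a minimal sketch in PyTorch of a recurrent sequence-to-sequence model: the encoder compresses the whole source sentence into a fixed-size state, and the decoder generates the target sentence from that state. It is a bare-bones illustration with invented dimensions; real NMT systems add attention, beam search, subword vocabularies and much more, and this is not a description of any particular production system.

```python
import torch
import torch.nn as nn

# Toy dimensions; real systems use vocabularies of tens of thousands of
# (sub)words and much larger hidden states.
SRC_VOCAB, TGT_VOCAB, EMB, HID = 1000, 1000, 64, 128

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(SRC_VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)

    def forward(self, src):                  # src: (batch, src_len) token ids
        _, hidden = self.rnn(self.embed(src))
        return hidden                        # (1, batch, HID) whole-sentence summary

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(TGT_VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, TGT_VOCAB)

    def forward(self, tgt, hidden):          # tgt: (batch, tgt_len) token ids
        output, hidden = self.rnn(self.embed(tgt), hidden)
        return self.out(output), hidden      # logits over the target vocabulary

# The encoder reads the source sentence as a whole; the decoder generates the
# target sentence conditioned on that single summary state.
encoder, decoder = Encoder(), Decoder()
src = torch.randint(0, SRC_VOCAB, (2, 7))    # a batch of two toy "sentences"
tgt = torch.randint(0, TGT_VOCAB, (2, 9))
logits, _ = decoder(tgt, encoder(src))
print(logits.shape)                          # torch.Size([2, 9, 1000])
```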

NMT has shortcomings as well. Neural networks require a lot of training data, on the order of one million sentence pairs, and there’s currently no good solution for translating rare or unseen (out-of-vocabulary, OOV) words.  There have been a few proposals on how to address this problem, but nothing firm yet.  At Welocalize, we’re actively pursuing ideas of our own on how to fix the OOV problem for client data, and we’re also working on reducing the amount of client data necessary to train a good NMT system.
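The root of the problem is that the vocabulary is frozen when training starts, so anything outside it collapses to a single unknown token at run time. A toy illustration (the vocabulary here is obviously invented):

```python
# Toy illustration of the out-of-vocabulary (OOV) problem: the vocabulary is
# fixed at training time, so unseen words all collapse to the same <unk> token.
VOCAB = {"the", "bank", "approved", "loan", "<unk>"}

def to_known_tokens(sentence: str) -> list[str]:
    return [w if w in VOCAB else "<unk>" for w in sentence.lower().split()]

print(to_known_tokens("The bank approved the mortgage"))
# ['the', 'bank', 'approved', 'the', '<unk>']: "mortgage" can no longer be translated faithfully
```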

The other major shift is that training large neural networks efficiently requires a different set of hardware.  SMT requires a lot of memory to store phrase tables, and training can be “parallelized” to work better on CPUs with multiple cores.  NMT, on the other hand, requires high-end GPUs (yes, video cards) for training.  We’ve invested in the infrastructure necessary to do the work, and we’re working hard to get this exciting new technology ready for our clients to use.  Our early results with a variety of domain-specific data sets are very promising.

We’re not alone in our excitement. Many talks and posters at MT conferences are dedicated to advancement and progress in NMT. Google and Microsoft are both working on ways to use NMT in their translation products, with a special interest in how NMT can significantly improve fluency in translation between Asian and European languages. Watch this space in the weeks and months to come for updates on our progress with this exciting technology.

Dave

Dave Landan is Senior Computational Linguist at Welocalize. david.landan@welocalize.com

Welocalize is a bronze sponsor at EAMT 2016. Click here for more information.

Read the Welocalize & Trend Micro MT Case Study: MT Suitability Pilot Shortens Translation Times & Reduces Costs

[1] Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the ACL-2007 Demo and Poster Sessions, Prague, Czech Republic.

 

Welocalize Language Tools Team Highlights EAMT 2015 Conference

The Welocalize Language Tools Team recently presented at the 2015 EAMT Conference in Antalya, Turkey.  Olga Beregovaya, Welocalize VP of Language Tools and Automation, was an invited guest speaker at the conference.  She presented “What we want, what we need, what we absolutely can’t do without – an enterprise user’s perspective on machine translation technology and stuff around it,” with the main objective of promoting collaboration between academia and field users. Olga also presented, with Welocalize Senior Computational Linguist Dave Landan, “Streamlining Translation Workflows with Welocalize StyleScorer” as part of the project and product description poster session.

In this blog, Olga Beregovaya, Dave Landan and Dave Clarke, Principal Engineer for the Language Tools Team, share their insights from the 2015 EAMT Conference.

HIGHLIGHTS FROM OLGA BEREGOVAYA

Olga Beregovaya gives her impressions of EAMT 2015 and highlights her favorite presentations from the user track.

As a global language service provider, Welocalize places great importance on its language technology and translation automation strategy. The EAMT conference and associated conferences are excellent forums to attend, as the team can share real-life MT production experiences and learn more about the latest innovations and research projects. As always, there were many interesting research papers and posters at EAMT, all delivered by highly talented colleagues in the field of MT and all describing very innovative and promising approaches.

I was proud of Welocalize’s own poster presentation, describing work by my colleague Dave Landan, “Streamlining Translation Workflows with StyleScorer.” Capturing and evaluating the style of both training corpora and target text has traditionally been one of the biggest challenges in the industry. The tool Dave has created allows us to compare the style of the input text with the available training data and build the most relevant MT engine, and also to assess the stylistic consistency of the target text and its adherence to the client’s style guide.

The poster presented by Mārcis Pinnis, Dynamic Terminology Integration Methods in Statistical Machine Translation, was very interesting for the team. Integrating terminology in a linguistically aware way is a major pain point for domain adaptation of SMT engines, so as a program owner I found this poster presentation particularly relevant to our work.

Another very relevant presentation was the paper delivered by Laxström et al., Content Translation: Computer-assisted translation tool for Wikipedia articles. It described a tool created by Wikipedia to promote translation and post-editing of machine-translated articles by Wikipedia users. Community translation is more important for Wikipedia than for any other organization in the world. As content democratization is the key paradigm shift of modern times, tools that enable a “casual translator” to contribute and make content available globally have become an essential component of the global content universe.

Finally, Joss Moorkens and Sharon O’Brien presented an excellent poster called Post-Editing Evaluations: Trade-offs between Novice and Professional Participants. Building an efficient and productive post-editing supply chain, one that is open to new tools and new ways of working, is an essential component of a successful LSP MT program. Joss and Sharon compared how experienced translators and novice users perceive MT output and a new CAT environment.

HIGHLIGHTS FROM DAVE LANDAN

Dave Landan, Senior Computational Linguist at Welocalize and an EAMT 2015 presenter, identified two presentations he found particularly interesting.

This year’s EAMT conference started strong with several interesting talks and papers on a range of topics.  While there were many strong research papers, I would like to mention two that stood out for me. Bruno Pouliquen presented findings on linear interpolation of small, domain-specific models with larger general models. At Welocalize, we hope to try these methods with our own data, and we are optimistic about the possibilities!  The other research paper that stood out for me was by Wäschle and Riezler. This paper presented innovations around using fuzzy matches from monolingual target language documents to improve translations. I am excited about expanding our collaborations with the academic community.

HIGHLIGHTS FROM DAVE CLARKE

Dave Clarke, Principal Engineer at Welocalize, is a regular participant at EAMT. One topic touched on many times at EAMT 2015 was the evolution of CAT tools and their impact on productivity. He shared the following perspective.

From a technical or tools perspective, the EAMT conference provided considerable insight into how translation tools could and should evolve. One such insight was provided by the best paper award winner, “Assessing linguistically aware fuzzy matching in translation memories,” by Tom Vanallemeersch and Vincent Vandeghinste from the University of Leuven. The algorithms typically used in CAT tools to calculate fuzzy match values from translation memories have little or no linguistic awareness, yet those values are firmly established as a stable unit of currency in our industry. This paper implemented and tested alternative fuzzy match algorithms that identify potentially useful matches based on their linguistic similarities. Tests carried out with translation master’s degree students, measuring translation time and keystrokes, strongly suggest the potential for unlocking further productivity from existing resources.
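To illustrate the distinction, consider a toy sketch (my own, not the algorithm from the paper): a plain character-level fuzzy match penalizes inflectional differences that a lemma-aware comparison would ignore. The little lemma table below is invented; a real system would use a proper lemmatizer for the language in question.

```python
from difflib import SequenceMatcher

# Invented toy lemma table; a real system would use a proper lemmatizer.
LEMMAS = {"clicked": "click", "clicks": "click", "buttons": "button",
          "button": "button", "the": "the", "user": "user"}

def surface_match(a: str, b: str) -> float:
    """Character-level fuzzy match, roughly what most CAT tools compute."""
    return SequenceMatcher(None, a, b).ratio()

def lemma_match(a: str, b: str) -> float:
    """The same measure computed over lemmatized tokens."""
    lem = lambda s: " ".join(LEMMAS.get(w, w) for w in s.lower().split())
    return SequenceMatcher(None, lem(a), lem(b)).ratio()

tm_segment = "The user clicked the button"
new_segment = "The user clicks the buttons"
print(round(surface_match(tm_segment, new_segment), 2))  # below 1.0: inflection is penalized
print(round(lemma_match(tm_segment, new_segment), 2))    # 1.0: identical after lemmatization
```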

The other presentation that stood out for me was “Can Translation Memories afford not to use paraphrasing?” by Rohit Gupta, Constantin Orasan, Marcos Zampieri, Mihaela Vela and Josef Van Genabith.

More MT productivity and quality can be achieved with incremental and specialized improvements; however, it will be a cumulative process. Importantly, NLP can drive ‘intelligent’ aids to productivity within a translator’s working environment, including auto-suggest/complete, advanced fuzzy matching, automatic repair and others. Not all of them will benefit every user. CAT tool platforms may now evolve so that these innovations can be quickly absorbed into the environment with little cost or effort, letting each translator maximize their own productivity with the combination of aids that best suits their style of work. We even saw a project from ADAPT, in its early stages, to develop a platform for CAT tool designers that allows data to be quickly defined and measured when testing prototype productivity-enhancement functions.

To echo the words of the outgoing EAMT President, Professor Andy Way, it was good to see researchers really getting to grips with specific, known problems. It was encouraging to see more focused work on such errors that we know first-hand to have a particular impact on productivity, for example, improvements in terminology selection, new methods to improve choice of preposition and more. It was also encouraging to see the increase in research presented with supporting data gained from end-user evaluation rather than the automatic evaluation metric staples that have long been the norm. In fact, ‘BLEU scores’ almost, just almost, became a dirty… bi-gram.

“Overall, EAMT 2015 was a great conference, attended by extremely talented people, and held, we should not forget to mention, in beautiful Antalya, Turkey,” said Olga Beregovaya.

View Olga Beregovaya’s EAMT presentation, “What we want, what we need, what we absolutely can’t do without – an enterprise user’s perspective on machine translation technology and stuff around it” below.

For more information about Welocalize’s MT program, weMT, click here.

Click the link to see Dave Landan and Olga Beregovaya’s EAMT poster presentation, Streamlining Translation Workflows with StyleScorer: EAMT_POSTER 2015 by Welocalize.


Welocalize to Present at 18th European Association for Machine Translation Conference

Frederick, Maryland – May 7, 2015 – Welocalize, global leader in innovative translation and localization solutions, will share industry insight and expertise at the 18th Annual Conference of the European Association for Machine Translation (EAMT) taking place in Antalya, Turkey, May 11-13, 2015, at the WOW Topkapi Palace.

“I am very excited to be taking part as an invited speaker at this year’s EAMT 2015 Conference in Turkey,” said Olga Beregovaya, VP of language tools and automation at Welocalize. “EAMT is an important international conference for the MT community. It is where experts, thought leaders and users of machine translation can meet and share research, findings and new tools to help their language technology strategy.”

Featured Welocalize presentations at the 18th Annual Conference of the European Association for Machine Translation:

  • Welocalize VP of Language Tools and Automation, Olga Beregovaya will deliver her keynote, “What We Want, What We Need, What We Absolutely Can’t Do Without – An Enterprise User’s Perspective on Machine Translation Technology and Stuff Around It” at 9:30 – 10:00am on Tuesday, May 12.
  • Olga Beregovaya along with Welocalize Senior Computational Linguist Dave Landan will be presenting “Streamlining Translation Workflows with Welocalize StyleScorer” as part of the poster project and product description session on Tuesday, May 12.

For more information about the EAMT 2015 conference, visit http://www.eamt2015.org.

About Welocalize – Welocalize, Inc., founded in 1997, offers innovative translation and localization solutions helping global brands to grow and reach audiences around the world in more than 157 languages. Our solutions include global localization management, translation, supply chain management, people sourcing, language services and automation tools including MT, testing and staffing solutions and enterprise translation management technologies. With over 600 employees worldwide, Welocalize maintains offices in the United States, United Kingdom, Germany, Ireland, Italy, Japan and China. www.welocalize.com

Welocalize StyleScorer Helps MT and Linguistic Review Workflow

Innovation is one of the four Welocalize pillars that form the foundation of everything we do as a business. Clients and partners rely on our leadership to drive technological innovation in the localization industry. One of our latest innovative efforts is the soon-to-be-deployed language tool Welocalize StyleScorer, which will form part of the Welocalize weMT suite of linguistic and automation language tools. One of the driving forces behind StyleScorer is Dave Landan, computational linguist at Welocalize and a key player in many Welocalize MT programs.

In this blog, Dave shares the key components of StyleScorer and how style analysis tools can help the MT and linguistic review workflow.

At Welocalize, we are constantly looking for ways to improve the quality and efficiency of the translation process. Part of my job as a computational linguist is to create tools that help people spend less time looking for potential problems and more time fixing them. One of my team’s latest efforts in this area is StyleScorer.

Welocalize StyleScorer is currently in the early deployment testing phase. This tool will be deployed as part of the Welocalize weMT suite of language tools around linguistic analysis and process automation. I’d like to share some of the key components of StyleScorer and the role it will play in the MT and linguistic review workflow.

What is StyleScorer?

Welocalize StyleScorer is a tool that compares a single document to a set of two or more other documents and evaluates how closely they match in terms of writing style. The documents being compared must all be in the same language; however, there is no restriction on which language that is.

The main difference between StyleScorer and existing style analysis tools is that rather than summarize types of style differences (for example: “17 sentences with passive voice”), it takes a gestalt approach and gives each document a score anywhere between 0 and 4, with 0 being a very poor match to the style and 4 being a very good match.

To do this, StyleScorer uses statistical language modeling as well as innovations from NLP (natural language processing), forensic linguistics and neural networks (machine learning) in order to rate documents on how closely they match the style of an existing body of work. Because it learns from the documents it’s given, even if you don’t have a formal style guide, StyleScorer will still work as long as the training documents can be identified by a human as belonging to a cohesive group.
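StyleScorer’s internals aren’t published, but to give a sense of the general idea, here is a deliberately simplified, hypothetical sketch: build a character n-gram profile of the reference documents and map a new document’s overlap with that profile onto the 0-4 scale. The scoring formula, thresholds and example texts are all invented for illustration; the real tool combines richer signals (language models, forensic-linguistic features, neural networks).

```python
# Deliberately simplified, hypothetical sketch of style scoring: profile the
# reference documents with character trigrams and score a new document by how
# much of it falls inside that profile, mapped onto a 0-4 scale.

def char_ngrams(text: str, n: int = 3) -> list[str]:
    text = " ".join(text.lower().split())            # normalize case and whitespace
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def train_style_profile(reference_docs: list[str], n: int = 3) -> set[str]:
    profile = set()
    for doc in reference_docs:
        profile.update(char_ngrams(doc, n))
    return profile

def style_score(doc: str, profile: set[str], n: int = 3) -> float:
    grams = char_ngrams(doc, n)
    if not grams:
        return 0.0
    coverage = sum(g in profile for g in grams) / len(grams)
    return round(4.0 * coverage, 2)                  # 0 = poor match, 4 = very good match

reference = ["Click Save to store your changes.",
             "Choose File > Open to load a project."]
profile = train_style_profile(reference)

print(style_score("Click Open to load your file.", profile))                   # scores high
print(style_score("Our revolutionary synergy empowers your brand!", profile))  # scores low
```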

How does StyleScorer help the MT workflow?

While we think StyleScorer will be very useful as part of the linguistic review workflow for human translation, we are even more excited about how it can benefit the MT (machine translation) workflow at several points of the process both on source and target language documents.

One of the key components to training a successful MT system is starting with a sufficient amount of quality bilingual data. We are seeing more and more clients who are very interested in MT; however, they don’t have a lot of bilingual training data to get started. In the past, the only option available to those clients was a generic MT engine (similar to what you’d get off-the-shelf). This gets someone started in MT, though the quality of generic engines is generally lower than engines trained with documents that match the client’s domain and style.

We can use StyleScorer to filter open-source training data to find additional documents to train from that are closest to the client’s documents. High-scoring open-source data can then be used to augment the client’s training data, which allows us to build better quality MT engines for those clients early in the project life cycle.
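As a rough sketch of what that data-selection step could look like (the scorer below is a trivial stand-in for a StyleScorer call, and the documents and threshold are invented):

```python
# Hypothetical data-selection step: keep only candidate documents whose style
# score against the client's reference material clears a threshold, then use
# them to augment the client's own training data.

def style_score(doc: str, reference_docs: list[str]) -> float:
    """Toy 0-4 score based on shared vocabulary; stands in for StyleScorer."""
    ref_words = {w for ref in reference_docs for w in ref.lower().split()}
    words = doc.lower().split()
    return 4.0 * sum(w in ref_words for w in words) / max(len(words), 1)

client_docs = ["Click Save to store your changes.",
               "Choose File > Open to load a project."]
candidate_docs = ["Click Open to load a file.",
                  "Our revolutionary synergy empowers your brand!"]

THRESHOLD = 2.0   # arbitrary, for illustration
augmentation = [d for d in candidate_docs if style_score(d, client_docs) >= THRESHOLD]
training_data = client_docs + augmentation   # better-matched data for engine training
print(augmentation)                          # only the stylistically similar document survives
```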

If some documents are getting lower quality translations from MT than others, we can use StyleScorer as a sanity check as to whether the source document being translated matches the style of the client’s other documents in the same language and domain. An engine trained exclusively on user manuals probably won’t do well on translating marketing materials. StyleScorer gives us a way to look for those anomalies automatically.

We are particularly excited about using StyleScorer on target language documents to help streamline workflows. If we run StyleScorer on raw MT output, we can use the scores to rank which documents are likely to need more PE (post-editing) effort to bring them in line with the style of known target documents. This is particularly useful for clients with limited budgets for PE and clients with projects that require extremely fast turnaround because it allows us to focus PE work where it is needed the most.
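A rough sketch of that triage step (the scores stand in for StyleScorer output against known target-language documents; the file names and numbers are invented):

```python
# Hypothetical post-editing triage: rank raw MT output by style score so the
# lowest-scoring (likely hardest to post-edit) documents are handled first.
raw_mt_scores = {
    "user_manual_ch3.txt": 3.6,          # illustrative 0-4 scores
    "marketing_landing_page.txt": 1.2,
    "ui_strings.txt": 2.8,
}

pe_queue = sorted(raw_mt_scores, key=raw_mt_scores.get)   # lowest score first
print(pe_queue)
# ['marketing_landing_page.txt', 'ui_strings.txt', 'user_manual_ch3.txt']
```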

Finally, we envision StyleScorer becoming part of the QA & linguistic review process by spot-checking post-edited and/or human translated documents against existing target language documents. Translations that receive lower scores may need to be double-checked by a linguist to make sure the translations adhere to established style guides. If it turns out that low-scoring translations pass linguistic review, we use them to update the StyleScorer training set for the client’s next batch of documents.

Dave

david.landan@welocalize.com

Based in Portland, Oregon, Dave Landan is a Senior Computational Linguist for Welocalize’s MT and language tools team.