Welocalize White Paper – Machine Translation: Neural or Neutral?

MultiLingual Magazine recently published the Welocalize White Paper, Machine Translation: Neural or Neutral? The paper gives fascinating insight into the MT landscape and explores neural machine translation (NMT), weighing the notion that NMT should be put into production immediately.

Authored by leading language technology and MT experts at Welocalize, the paper argues that the approach should be measured: in the commercial world of MT, neural, statistical and rules-based engines all have a role to play.

Click here to register and download Welocalize White Paper, Machine Translation: Neural or Neutral?

In this white paper, key NMT considerations include:

  • Infrastructure and Cost 
  • Training and Maintenance 
  • Quality 
  • Data 
  • Key Players

Register and download PDF here

If you would like further information about Welocalize MT, visit Welocalize Machine Translation (MT) Solutions or email

Further Reading on Innovators Blog:

Welocalize Update on Neural Machine Translation

Neural Machine Translation is the Next Big Thing

Welocalize Update on Neural Machine Translation

Neural Machine Translation (NMT) is currently one of the most discussed topics in the globalization and localization industry. Born out of a shift toward artificial intelligence and deep learning, NMT is widely cited as a future technology that will be able to translate high volumes of content at quality. Over the past few years, researchers and academic institutions have been shifting focus from statistical machine translation (SMT) toward developing neural networks to improve the speed and quality of translation output.

Dave Landan, Senior Computational Linguist at Welocalize, works on the development of NMT solutions at Welocalize. His blog, Neural MT is the Next Big Thing, published in May 2016, gives an expert and comprehensive account of the history of MT and the emergence of SMT and NMT. In this latest blog, Dave provides expert insights into industry developments in NMT and how Welocalize continues to invest in NMT to bring it further into commercial use.

NMT is an emerging technology, and both academic institutions and MT organizations are still in the early stages of developing NMT offerings for commercial use. Investment and development of NMT by the large technology firms continues, with both Microsoft and Google now offering generic NMT systems, translating between English and a limited number of locales.  While most solutions continue to use Recurrent Neural Networks (RNNs), Facebook AI Research has released open-source code using Convolutional Neural Networks (CNNs), which offer the potential of faster training. The translation industry continues to be dominated by statistical machine translation (SMT) in production, with NMT only recently emerging from the lab.

At Welocalize, our goal is to provide the best quality-to-cost ratio for our clients’ requirements. We deliver that via translation or post-editing, whether through NMT, SMT or a hybrid program, by continuing both partner engagements and investment in our own research and development. We’ve expanded our own NMT research to three separate code bases, and we have contributed code to the OpenNMT project. We’re also using GPU compute clusters in the cloud and investing in more in-house hardware to expand our NMT training capabilities.

You may have read or heard of the “rare word problem” for NMT – because vocabulary size is fixed at training time, NMT systems aren’t as well-suited as SMT systems to handling rare or unseen words in production.  We’re making good progress on limiting the effects of the rare word problem using a variety of techniques, and we’ve carried out some very promising experiments with adapting generic models to client- and topic-specific data.
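One family of techniques for limiting the rare word problem is subword segmentation, such as byte-pair encoding (BPE): the engine’s fixed vocabulary is built from frequent character sequences, so an unseen word can still be assembled from known pieces. As a rough illustration only (a toy sketch, not our production tooling), the learning and segmentation steps look like this:

```python
from collections import Counter

def learn_bpe(words, num_merges):
    """Learn BPE merge operations from a list of words (toy version)."""
    # Each word starts as a tuple of characters plus an end-of-word marker.
    vocab = Counter()
    for w in words:
        vocab[tuple(w) + ("</w>",)] += 1
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for sym, freq in vocab.items():
            for a, b in zip(sym, sym[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the best merge everywhere in the vocabulary.
        new_vocab = Counter()
        for sym, freq in vocab.items():
            out, i = [], 0
            while i < len(sym):
                if i < len(sym) - 1 and (sym[i], sym[i + 1]) == best:
                    out.append(sym[i] + sym[i + 1]); i += 2
                else:
                    out.append(sym[i]); i += 1
            new_vocab[tuple(out)] += freq
        vocab = new_vocab
    return merges

def segment(word, merges):
    """Segment a (possibly unseen) word using the learned merges, in order."""
    sym = list(word) + ["</w>"]
    for a, b in merges:
        out, i = [], 0
        while i < len(sym):
            if i < len(sym) - 1 and sym[i] == a and sym[i + 1] == b:
                out.append(a + b); i += 2
            else:
                out.append(sym[i]); i += 1
        sym = out
    return sym

# Toy corpus; "lowest" never appears, yet it can still be segmented
# into subword units the model has seen.
corpus = ["low"] * 5 + ["lower"] * 2 + ["newest"] * 6 + ["widest"] * 3
merges = learn_bpe(corpus, num_merges=10)
pieces = segment("lowest", merges)
```

Because every character remains a fallback unit, no word is ever truly out of vocabulary; it is simply represented by more, smaller pieces.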

If you want to get started with NMT, we recommend you do so with one or two language pairs that are traditionally difficult for SMT systems, like English – Chinese, English – Japanese, or even English – German.  The truth is that in many cases, for well-established language pairs like English – Spanish or English – Portuguese, customized SMT systems do as well as (and often better than) the nascent NMT systems.

Developing customized MT engines, whether neural or statistical, will continue to be the optimal approach to clients’ MT needs. There is room and demand for both methods. Every client has its own terminology, style, tone and voice, and we take these factors into consideration when developing new MT programs, just as we have done with the MT-driven solutions that many of our Fortune 500 clients enjoy.


Based in Dublin, Ireland, Dave Landan is Senior Computational Linguist at Welocalize.

Welocalize Releases New Features In GlobalSight Version 8.7.3

Frederick, Maryland – March 22, 2017 – Welocalize, global leader in innovative translation and localization solutions, announced today the latest public release of the open-source translation management system (TMS), GlobalSight. GlobalSight 8.7.3 delivers added functionality to enable end-users to drive more agile and efficient localization programs to translate higher volumes of global content.

“The latest release of GlobalSight focuses on the emerging requirements of our clients, including increased use of machine translation and post-editing. GlobalSight 8.7.3 now counts MT as a separate word count, and reports can now be generated on MT segments, which reflects the effectiveness of the MT engines,” said Andrew Gibbons, GlobalSight product manager and senior software engineer at Welocalize. “The GlobalSight community will gain great value from the new features and functionality of GlobalSight 8.7.3.”

The main enhancements in GlobalSight 8.7.3 include:

  • Machine Translation (MT). MT can now be classified as a separate word count and users can establish separate translation memories (TM) for MT content. Entire jobs or specified project segments can now be re-translated through MT.
  • Reporting. Post-edit distance reports can now be generated on MT segments. Tailored reports can be generated to measure the effectiveness of the MT engine, referenced against human translation work.
  • Increased connectivity. Enhanced API implementation to increase the efficiency of job creation and TM management.

GlobalSight 8.7.3 release notes, including product documentation, bug tracker and forums are available online at The new version of GlobalSight is available for download at

For more information about GlobalSight, please visit

Welocalize, Inc., founded in 1997, offers innovative language services to help global brands reach audiences around the world in more than 175 languages. We provide translation and localization services, talent management, language tools, automation and technology, quality and program management. Our range of managed language services include machine translation, digital marketing, validation and testing, interpretation, staffing and enterprise translation management technologies. We specialize in consumer, technology, manufacturing, learning, oil and gas, travel and hospitality, marketing and advertising, finance, legal and life sciences industry language solutions. With more than 1,500 full-time employees worldwide, Welocalize maintains offices in the United States, United Kingdom, Germany, Ireland, Spain, Italy, Romania, Poland, Japan and China.

Welocalize Highlights from AMTA 2016 Conference

Welocalize recently sponsored and presented at the 12th biennial conference of the Association for Machine Translation in the Americas (AMTA) in Austin, Texas. Welocalize Program Manager Elaine O’Curran from the Welocalize Technology Solutions team shares her highlights from the event. Elaine was recently appointed secretary of AMTA, and Welocalize Vice President of Technology Solutions Olga Beregovaya was appointed the new president of AMTA.

Alex Yanishevsky, Welocalize Senior Manager of Globalization Technology Strategy, presented at the AMTA 2016 conference. Click here to view his presentation, I Ate Too Much Cake: Beyond Domain-Specific MT Engines.

The AMTA 2016 Conference had something to offer everyone involved in machine translation (MT). We heard many attendees agree that the AMTA conference is the gathering where you can find real, substantive information on MT. The pre- and post-conference tutorials and workshops covered a wide range of topics in depth, from an introduction to MT in CAT tools and Adaptive MT through to the latest advances in Neural MT.

One afternoon during the main conference was devoted to a technology showcase of commercial and research-stage translation technologies. For the main conference, there were three parallel tracks, dedicated respectively to researchers, commercial users and government users, each featuring original and refereed presentations.

Here are my main highlights from the AMTA 2016 Conference:

Highlight #1: Excellent Plenary Sessions

  • Rico Sennrich (University of Edinburgh) presented on the status and prospects for Neural MT, which is rapidly surpassing current methods. Although fluency is vastly improved, Rico cautions that we have limited ability to interpret and manipulate neural networks (read: lack of control) and more research on terminology integration is needed.
  • Spence Green (Lilt) reviewed the challenges in providing an online service for interactive MT and gave a brief demo of Lilt. Spence and team have clearly invested a lot of time and research to perfect a user interface that is both ergonomic and productive for post-editors.
  • Daniel Marcu (ISI/USC, FairTradeTranslation) moderated a panel on “MT Commercialization: Past, Present, and Future.” The panelists were Macduff Hughes (Google), Valery Jacot (Autodesk), Dragos Stefan Munteanu (SDL), and Chris Wendt (Microsoft). The most interesting part of the discussion touched on current trends that keep the panelists awake at night. Hughes mentioned the stability of the neural model, as we can experience large model changes from small amounts of new data and large changes in translations from minor changes in the source.

The panelists also provided their perspective on the future of MT providers and predicted there will be only a handful of major players 10 years from now.

Highlight #2: Implementation of Adaptive and Interactive MT

We witnessed implementations of adaptive and/or interactive MT by Lilt, ModernMT and SDL, advances which improve the post-editing task for translators and signal a departure from ‘static’ MT. There is a prerequisite, however, that translators post-edit in an online environment. While interactive MT improves suggestions within a segment during typing, adaptive MT works across segments. Adaptive MT learns as a user post-edits a project and the MT system is immediately updated with each confirmed segment. This reduces repetitive edits and increases productivity for translators. Lilt demonstrated both interactive and adaptive MT in their system, while ModernMT and SDL demonstrated adaptive MT.
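The adaptive behavior described above can be caricatured in a few lines. This toy class (my own sketch, not any vendor’s actual API) only reuses exact repeats of confirmed segments; a real adaptive system updates the underlying model so corrections generalize to similar segments as well:

```python
class AdaptiveMTSession:
    """Toy model of adaptive MT: each confirmed post-edit immediately
    influences subsequent suggestions (here, only for exact repeats)."""

    def __init__(self, base_translate):
        self.base_translate = base_translate  # fallback "static" MT function
        self.confirmed = {}                   # source segment -> post-edited target

    def suggest(self, source):
        # Prefer a translator-confirmed translation if we have one;
        # otherwise fall back to the static engine's output.
        if source in self.confirmed:
            return self.confirmed[source]
        return self.base_translate(source)

    def confirm(self, source, post_edited):
        # The "model update": the confirmed segment is learned at once,
        # so a repeated segment never needs the same edit twice.
        self.confirmed[source] = post_edited

# Demo with a stand-in "engine" that just tags the source text.
mt = AdaptiveMTSession(lambda s: "[MT] " + s)
first = mt.suggest("Click Save to continue.")
mt.confirm("Click Save to continue.", "Cliquez sur Enregistrer pour continuer.")
second = mt.suggest("Click Save to continue.")
```

The productivity gain comes from exactly this loop: once a segment is confirmed, the repetitive edit disappears for the rest of the project.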

Highlight #3: Good Enterprise Use-Cases

The AMTA 2016 conference was well represented by the enterprise sector. We saw presentations from Autodesk, Etsy, Intel, Microsoft and VMware, who shared their challenges and successes in evaluating and rolling out MT for various use cases. This year we saw a heightened focus on the use of raw MT, which signals that acceptance of this use case is increasing among MT users.

Highlight #4: Mature MT Programs

My Welocalize colleague, Alex Yanishevsky, presented his second installment on the challenges and opportunities for mature MT programs. Once we reach a scoring plateau, there is an opportunity to push the MT engagement upstream through automatic analysis of source content suitability and source profiling. This analysis provides data to improve source authoring and content strategies, which ultimately results in better MT output and thus more productivity for translators.

Click here to download Alex’s presentation, “I Ate Too Much Cake: Beyond Domain-Specific MT Engines.”

We are now at a crossroads, waiting for emergent MT technologies – Neural, Adaptive and Interactive – to make the full transition from research to commercially viable solutions at enterprise scale. These are exciting times for the MT team at Welocalize as we immerse ourselves in evaluations and experiments to benchmark these emergent technologies against the status quo.


Welocalize Program Manager and AMTA Secretary, Elaine O’Curran


Insights on Quality and MT for Localizing User Generated Content 

The premise behind the Welocalize LocLeaders event series is to provide a forum for some of the localization industry’s most influential leaders to discuss new concepts and how these ideas will shape the future of global business. How to deal with the mind-blowing volumes of source content, primarily network and user-generated content (UGC), now flooding our lives daily is one of the challenges the localization industry must address.

There is great opportunity for businesses who can apply strategic, innovative approaches to localizing UGC and create greater engagement with customers all over the world. Moreover, the risks of not localizing also warrant keen examination so as not to lose a crucial edge on the competition.

The Welocalize LocLeaders Forum 2016 Montreal panel discussion, “Quality Validation for Network Generated Content,” brought together expertise from attendees who gave valuable insight into the challenges of dealing with evolving types of source content, including UGC.

There are different budgets, different quality expectations and certainly a different sense of urgency for localization, depending on who, what, when, why and how the localized content will be consumed. Source content needs to be categorized, with quality expectations defined; a calculation of potential return on investment then determines the priority for localization. UGC usually has a short shelf-life; however, in cases such as a breaking news announcement, the instant impact of the message warrants that translation is done fast and accurately, or not at all.

The localization budget plays an important role. Translating high volumes of UGC using a more traditional localization approach would be too expensive and time-consuming.

There is a lot of buzz around community or crowdsourcing models, which, for the most part, rely on the goodwill of their user base, as a highly cost-effective and scalable model for both translating and validating UGC. However, closer evaluation reveals that a ‘crowd’ willing to offer its services for free cannot be expected to mobilize for just any content type. There needs to be a deep-rooted passion for a product or a movement, which, in itself, drives a desire to make sure that consumers in the target market are able to experience the product or message in their native language. If such a community doesn’t exist, then other options need to be explored.

Machine translation (MT) is quickly becoming a standard tool for localizing UGC. Our LocLeaders panelists were all able to provide examples of how MT has sped up time-to-market, increased efficiency and reduced costs for their businesses.

MT makes it viable to translate content types that would otherwise be overlooked or forced into traditional localization methods that don’t suit next-generation content like UGC. With wider usage of MT, the role of the translator is shifting to that of a post-editor, with a focus on enhancing the raw MT output for better reuse and gradual improvement of MT engine quality over time.

The debate over the optimal way to localize UGC is only just beginning. By definition, we expect that users will increasingly devise their own creative methods for rendering source content into target formats. Welocalize aims to stay at the forefront of these developments and we will keep the discussions flowing at future LocLeaders Forums to ensure we continue to drive unique and innovative solutions for our clients.


Samantha Henderson, Senior Client Services Director at Welocalize

Samantha was a featured host at LocLeaders Forum 2016 Montreal for the panel discussion, “Quality Validation for Network Generated Content,” with Loy Searle from Intuit, Hanna Kanabiajeuskaja from Box and Andrzej Poblocki from Veritas.  Sam also joined Katie Belanger from Intuit at LocWorld Montreal the same week to present “Localization Models: The Search for the Optimal Linguistic Resource Model.”  If you would like to reach Sam to learn more about these presentations, reach out to

Welocalize to Present at tekom tcworld Conference 2016 in Germany

Frederick, Maryland – November 7, 2016 – Welocalize, global leader in innovative translation and localization solutions, is presenting at the upcoming tekom tcworld conference 2016 in Germany, taking place in Messe Stuttgart, November 8-10, 2016.  The tekom annual conference, along with the tcworld conference and tekom fair, are the largest global events for technical communications.

Tanja Schmidt, Welocalize machine translation (MT) program manager and member of the Welocalize Technology Solutions team, is a featured speaker at the tekom and tcworld annual conference 2016. Tanja will join Christian Weih, chief sales officer at Across Systems GmbH, to deliver a joint presentation, “Data Security in MT Setups,” on Thursday, November 10.

To coincide with the tekom and tcworld conferences, Welocalize will be welcoming clients, prospects and leading localization experts to the inaugural LocLeaders Local 2016 Germany special dinner, networking and panel presentation taking place on Tuesday, November 8, beginning at 17:30 at the Mövenpick Hotel Stuttgart Airport & Messe. The discussion will be presented in German.

LocLeaders Local 2016 Germany in Stuttgart will be hosted by Garry Levitt, vice president of EMEA at Welocalize. For more information and to register for LocLeaders Local 2016 Germany, visit

For more information about the tekom and tcworld conference and the presentation by Welocalize’s Tanja Schmidt, visit


School of Advanced Technologies for Translators 2016

Welocalize recently took part in the School of Advanced Technologies for Translators (SATT) 2016 in Trento, Italy. Welocalize MT Program Manager Tanja Schmidt was a featured speaker at the event and delivered an industry case presentation focused on Welocalize’s approach to machine translation (MT). In this follow-up blog, Tanja shares some of her highlights from the event.

SATT 2016, held this past September, was the first School of Advanced Technologies for Translators organized by Fondazione Bruno Kessler (FBK) in Trento, Italy; hopefully it is the first of many more to come. FBK’s main research areas lie in technological development and humanities studies, and with its Human Language Technology research unit, it has a dedicated research center for machine translation (MT) and natural language processing (NLP).

SATT 2016 was set up as a combination of lectures from speakers with educational, professional or industrial backgrounds, hands-on training and a certification course. The certification course was for MateCat, an enterprise-level online CAT tool that is the result of a three-year research project by FBK, Translated srl, Université du Maine and the University of Edinburgh.

The first day started with a lecture on training “translators to technology” from Ana Guerberof of the Universitat Autònoma de Barcelona (UAB). Ana gave a quick overview of how the translation industry has evolved since the early 1980s and what this means for translation students today. She explained that, due to the increased demand for post-editing, especially from translation agencies, UAB introduced its first “Machine Translation and Post-Editing” module in 2009, with topics ranging from MT basics, MT output evaluation and MT engine training to PE basics, different quality levels, controlled language and a lot more.

Other lectures included an introduction to MT and a demo of “Machine Translation in Use,” as well as a talk on the “Evaluation of Machine Translation Quality” from FBK’s Marcello Federico and Luisa Bentivogli. Both managed to present highly specific content in an easy yet sophisticated way that provided even experienced MT stakeholders with a new understanding of big topics like neural MT. Based on Marcello’s estimates, neural MT is poised to improve MT output quality by about 20%.

MT stakeholders in the industry should definitely be on the lookout for Modern MT, an engine being developed by a consortium consisting of Translated srl, FBK, TAUS and the University of Edinburgh. This engine, which will be available as a plug-in for MateCat and other CAT tools, will provide a ready-to-install application without any additional training requirement. Once fed with training data, Modern MT will be ready to translate; moreover, it will manage context automatically, so that domain-specific engines become obsolete. This is achieved by storing training segments together with context-linking information. The post-editor can then query Modern MT for different domain contexts on a per-segment basis.
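The idea of storing training segments together with context information can be illustrated crudely. The sketch below is my own invention, not Modern MT’s actual design: each stored translation keeps a context label, and at lookup time the candidate whose context overlaps the query’s most is preferred, which is what lets one store serve multiple domains:

```python
from collections import defaultdict

class ContextTM:
    """Toy context-aware store: translations are indexed by source text
    and disambiguated by word overlap with a query context."""

    def __init__(self):
        # source -> list of (context word set, target translation)
        self.entries = defaultdict(list)

    def add(self, source, target, context):
        self.entries[source].append((set(context.lower().split()), target))

    def lookup(self, source, context):
        candidates = self.entries.get(source, [])
        if not candidates:
            return None
        query = set(context.lower().split())
        # Pick the stored translation whose context overlaps the query most.
        return max(candidates, key=lambda entry: len(entry[0] & query))[1]

# "bank" has two German translations depending on domain context.
tm = ContextTM()
tm.add("bank", "Ufer", "river water shore")
tm.add("bank", "Bank", "money account loan")
hit = tm.lookup("bank", "open a money account")
```

Real systems would use richer context representations than bags of words, but the per-segment, context-driven retrieval is the core of the idea.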

Lectures from TAUS’s Dace Dzeguze and Jaap van der Meer focused on the TAUS DQF and the translation technology landscape, and addressed questions like, “Will there still be a need for human translators in the future?” and “Will there be separate professions going forward, such as the translator and the post-editor?” We will probably only know the answers once these developments have taken place; however, today’s translators are already different from translators 10 years ago. New skills add to the diversity of the profession and might cause some translators to specialize in certain areas, such as transcreation, which will probably always have to be done by humans. There will be different niches, though one thing is certain: the content to be translated, in whichever form, will most certainly continue to increase.

The first day then concluded with two additional industry use cases: one from Rebecca Bartolozzi, Machine Translation Language Specialist at eBay, and one from me, explaining Welocalize’s approach to MT and PE and giving a few tips for future translators and post-editors.

Day two focused on MateCat, with Alessandro Cattelan and Annalisa de Petra providing an introduction to and demo of MateCat and its project management, post-editing and data analysis capabilities. The rest of the day was spent in practical training sessions and a certification exam. We only had 30 minutes for the exam, so it was work under pressure, and from what I know, everyone passed!

Overall, SATT 2016 was a great event with a lot of great contributors and attendees from various areas. The atmosphere was very relaxed and personal, yet professional, which I really liked. FBK, keep up the great work, and when you announce SATT 2017, I’m in!

Tanja Schmidt

Welocalize MT Program Manager, Technology Solutions

Welocalize Presents at the School of Advanced Technologies for Translators 2016

Frederick, Maryland – September 8, 2016 – Welocalize, global leader in translation and localization solutions, is proud to sponsor and present at the upcoming School of Advanced Technologies for Translators (SATT) 2016 taking place in Trento, Italy on September 9-10 at the Scientific and Technological Hub of Fondazione Bruno Kessler.

SATT 2016 is a two-day educational program designed to provide professional translators and teachers with knowledge, experience and resources about the translation industry. Presenters will discuss topics and trends related to machine translation (MT) and post-editing technologies.

Tanja Schmidt, Welocalize MT program manager and member of the Welocalize Technology Solutions team, is a featured speaker at SATT 2016. She will share her expertise and knowledge in her planned industry case presentation, which focuses on Welocalize’s approach to MT.

“SATT 2016 is an important education program for professional translators to keep up-to-date with how the latest language technologies are being developed and utilized in real-life, commercial solutions,” said Olga Beregovaya, VP of language technology solutions at Welocalize. “Welocalize works with many global brands, integrating MT and post-edited MT programs into localization strategies with great success. We look forward to sharing our MT experiences and innovations at SATT 2016.”

SATT 2016 will teach professional translators, language service providers and translation educators the basics of MT, as well as relevant aspects of its application to post-editing from academic, professional and commercial perspectives. For more information and to register, visit


UGC in Modern-Day Localization

Over the past few years, the importance of user-generated content (UGC) in global marketing programs has steadily increased to what some classify as overwhelming volumes. More and more consumers post reviews about products and services online. Global brands are more accessible than ever, with their own Facebook and Twitter accounts making it easy to get in touch with the companies themselves. Networks and communities come together around one purpose: exchanging opinions with other customers and sharing and reacting to recent posts. And this is happening in a global exchange.

Today, companies are confronted with a vast amount of multilingual content at their disposal. Whether the goal is to promote further sales, initiate remedial measures in response to negative feedback or learn more about customers’ needs, no global company would want to leave this source of knowledge unexploited. The fact that it involves so many different languages has naturally turned UGC and network generated content into big topics within the localization industry.

What does UGC involve? First, a massive amount of “data” that is available for localization, primarily due to globalization as a whole. Second, content that is likely to be orthographically and grammatically incorrect and to contain a lot of Internet slang (most famously acronyms like “LOL” for “laughing out loud”). Both are challenges.

What does this mean for localization? There is already far more content out there than could ever be localized by human translators. Even big global players usually do not have the budget to get all this content fully localized, nor should they. Why spend an incredible amount of money localizing something that was never intended to be perfect in the first place and usually has a very short life span, with someone posting something new five minutes (or seconds) later?

Machine translation (MT) can help. Large amounts of data can be localized quickly and cost-effectively, to the expected level of comprehension for this type of content. Confidence scoring helps to assess understandability automatically, to then decide whether the raw MT is published or not. Content that does not meet the “pass” threshold can either be discarded completely or fed back into a post-editing cycle, depending on its importance. If there are slightly higher expectations right from the start, a certain level of post-editing can be included in the process by default.
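The confidence-based routing described above amounts to a simple three-way split. In this hedged sketch, the function name and the threshold values are invented for illustration; real pipelines derive thresholds from quality evaluations per engine and content type:

```python
def route_mt_output(segments, publish_threshold=0.8, discard_threshold=0.4):
    """Route machine-translated segments by confidence score.

    segments: iterable of (translated_text, confidence) pairs.
    Returns three lists: publish raw MT, send to post-editing, discard.
    """
    published, post_edit, discarded = [], [], []
    for text, score in segments:
        if score >= publish_threshold:
            published.append(text)       # good enough to publish as-is
        elif score >= discard_threshold:
            post_edit.append(text)       # worth a post-editing cycle
        else:
            discarded.append(text)       # below the "pass" threshold
    return published, post_edit, discarded

published, post_edit, discarded = route_mt_output([
    ("Great phone!", 0.93),
    ("Battery dies fast", 0.61),
    ("gr8 lol", 0.22),
])
```

Raising the publish threshold trades volume for safety; the same logic supports a "post-edit by default" setup by simply routing everything below the publish threshold to the post-edit bucket.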

With weMT and weImpact featuring DQF, Welocalize acts upon such customer-specific requirements and provides customized quality models based upon variables such as content objectives and budget. Depending on the client’s requirements around such variables, light, medium or full post-editing are then only some of the flexible options available to ensure UGC is localized to the right scale, quality, budget and timing needs.

Let’s get back to the problem of slang and orthographic mistakes for a minute. Assuming a brand wants to use raw MT to publish UGC in another language, how does MT deal with “teh” instead of “the” and “gr8” instead of “great”? Most likely, the MT engine will not recognize these words and will leave them untranslated, which, in the target language, will not make much sense to most readers. And understandability in the target language was the whole point of the exercise! Normalization is the way to go here. During normalization, the text is automatically scrubbed to correct “teh” to “the” and “gr8” to “great,” which our MT engine is then able to understand and process accordingly.
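A minimal normalization pass might look like the following. The slang table and the capitalization rule are purely illustrative; a production system would rely on far larger, language-specific resources and handle context-dependent cases:

```python
import re

# Toy normalization table (illustrative only).
SLANG = {"teh": "the", "gr8": "great", "u": "you", "lol": "laughing out loud"}

def normalize(text):
    """Replace known slang/typo tokens before the text is sent to MT."""
    def repl(match):
        word = match.group(0)
        fixed = SLANG.get(word.lower(), word)
        # Preserve the capitalization of the original token.
        return fixed.capitalize() if word[0].isupper() else fixed
    # Tokenize on runs of letters and digits so forms like "gr8" match.
    return re.sub(r"[A-Za-z0-9]+", repl, text)

cleaned = normalize("Teh battery is gr8")  # -> "The battery is great"
```

After this scrub, the MT engine sees only vocabulary it was trained on, which is what restores understandability in the target language.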

Apart from user-generated product reviews and the like, Welocalize is also working together with clients to make MT available in other areas where a fast “gisting” translation is desired. MT for technical support communication and MT as a means to provide a “preliminary” translation until the fully localized version becomes available are only two examples of advanced solutions we are offering to our clients. In addition to just providing the raw MT in such programs, we are also working on continuous improvement of the underlying engines, using proprietary human and automated assessments, as well as data-driven engine retraining efforts.

Sentiment analysis (SA), the process of analyzing user-generated content to identify whether its sentiment is positive, negative or neutral, is another related area in which we can offer clients support. It complements our portfolio of fast and cost-effective UGC translation and analysis solutions that help global companies make use of important business intelligence.
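At its simplest, lexicon-based sentiment analysis just counts positive and negative cues and compares the totals. The word lists below are illustrative stand-ins for the much richer models used in practice, which handle negation, intensity and domain-specific vocabulary:

```python
# Tiny illustrative lexicons; real systems use large, curated resources.
POSITIVE = {"great", "love", "excellent", "amazing"}
NEGATIVE = {"bad", "broken", "awful", "disappointed"}

def sentiment(review):
    """Classify a review as positive, negative or neutral by cue counts."""
    words = [w.strip(".,!?") for w in review.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

label = sentiment("I love this phone, the screen is great!")  # -> "positive"
```

Run before or after MT, such a classifier lets brands rank and triage multilingual UGC at scale, flagging negative feedback for human attention.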

If you want to learn more about UGC and sentiment analysis, please read our related blogs:

How Sentiment Analysis and MT Can Help You Make Sense of UGC Content

Ten Reasons Why Companies Need Multilingual User Generated Content

Tanja Schmidt

MT Program Manager, Technology Solutions







How Sentiment Analysis and MT Can Help You Make Sense of UGC Content

User generated content (UGC) plays a key role in global business, localization and marketing strategies. A growing number of consumers post and share comments and reviews about products, services and brand experiences. Many global brands have realized how valuable it is to harness the power and knowledge of their users and encourage conversations, discussions and opinion-sharing. Global companies like TripAdvisor, eBay, Facebook and YouTube are based on business models that share and rank user opinions.

TripAdvisor, the world’s largest travel site, processes over 320 million reviews per month! UGC is often posted in more than one language, and a growing area in the localization industry is translating and understanding UGC so that brands can monitor what multilingual consumers are saying about their products. This is called social listening.

By gathering and understanding UGC, businesses can use this data to promote further online sales, develop online digital marketing campaigns and provide feedback to product development.

FACT: 25% of Search Results for the world’s 20 largest brands are links to user-generated content. Source: Kissmetrics

One tweet or review can contain facts, tone and opinion that can have an impact on how others see a particular brand. It can be a challenge to collectively make sense of, rank and monitor UGC data in the source language, not to mention translate UGC into other languages.

Global organizations often use machine translation (MT) to translate UGC and social media content. MT allows large volumes of data to be translated rapidly to a quality level that is acceptable for this type of content. Once UGC has gone through MT, it is often re-published automatically. As part of this localization and translation process, a growing number of organizations are embracing sentiment analysis as a value-added task to rank source and translated UGC.

Sentiment analysis (SA) is the process of computationally identifying and categorizing opinion expressed in UGC, such as product reviews, social media posts and comments. It provides analysis of the “sentiment” of UGC content, to identify whether it is positive, negative or neutral. On a more complex level, some sentiment analysis tools will break down sections of a review, positive or negative, providing an overall outcome or rating for the piece of text.

The technology behind sentiment analysis is natural language processing (NLP) which focuses on the interaction between computers and language to enable text analysis. As organizations generate huge amounts of online UGC data, sentiment analysis is a key tool to make sense and create valuable business knowledge and intelligence. Working as part of an enterprise MT program, sentiment analysis can assess translated UGC text to enable ranking of multilingual reviews.
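As an illustration of the lexicon-based end of the spectrum, a minimal polarity scorer can be sketched as follows. The word lists are invented for the example; commercial sentiment analysis relies on trained NLP models rather than fixed lists.

```python
# Minimal lexicon-based sentiment scorer -- an illustration only.
# The word lists are invented; production systems use trained models.
POSITIVE = {"great", "love", "excellent", "fast", "reliable"}
NEGATIVE = {"bad", "slow", "broken", "hate", "poor"}

def sentiment(review: str) -> str:
    """Classify a review as positive, negative or neutral."""
    words = review.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("great phone, love the screen"))  # positive
```

In an enterprise MT program, a scorer like this would run on the translated UGC so that multilingual reviews can be ranked with a single set of rules or models.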

Global brands can use sentiment analysis as part of the decision-making process, to decide whether to re-publish and keep certain reviews or UGC data live. Data collected can also be used to help assess the performance of a particular product or service by monitoring overall user feedback posted in social media forums.

Integrating sentiment analysis into an enterprise MT program is an effective way to manage and understand large volumes of UGC in more than one language.  Welocalize has recently partnered with an innovative NLP specialist and is now delivering sentiment analysis and other text analytics services for a range of languages. For more information about sentiment analysis and Welocalize weMT and language tools solutions, email

Based in the United States, Elaine O’Curran is MT Program Manager at Welocalize.

Read more about TripAdvisor and Welocalize partnering together in this case study: 

Managing Effective Machine Translation in an Ever Changing Environment

Recap of LocWorld31 Presentation and Interview with Olga Beregovaya, Welocalize VP of Technology Solutions

Welocalize Vice President of Technology Solutions Olga Beregovaya joined Pablo Vasquez from NetApp to deliver a joint presentation on machine translation at Localization World 2016 Dublin. The presentation, “Managing an Effective MT Program in an Ever Changing MT Environment,” generated many thought-provoking discussions for LocWorld31 attendees and those enterprises who use machine translation (MT) solutions as part of their localization program.

Many enterprises tend to stick with their existing machine translation (MT) programs out of fear of the unknown and may perceive high risks in migrating to a new MT program or adding an MT provider to their engine pool. The LocWorld joint presentation tackled these fears by outlining the facts and dispelling misconceptions. The key message was to have an OPEN MIND.

In this interview by Louise Law, Welocalize Communications Manager, Olga highlights some of the primary considerations for enterprises when looking to migrate to the “next best” MT system, along with factors that may drive an enterprise customer to make the migration decision. Click here: LocWorld Presentation Managing MT to see Olga and Pablo’s LocWorld31 presentation, “Managing an Effective MT Program in an Ever Changing MT Environment.”

MT is always high on the list of key topics at LocWorld, what was the overall objective of your and Pablo’s presentation?

Our main objective was to talk to our LocWorld audience about following the innovation in language technology and approaching your MT strategy with an open mind. Pablo and I have worked together on multiple MT implementations and we wanted to share a hypothetical view of what would be involved in changing your MT system. If you want to add an engine to your pool or replace an MT engine, there should be no fear and you should be open minded. We’re talking innovation and disruption. When considering MT migration, many enterprise organizations think about whether they will lose their existing good work and whether it will deliver immediate ROI. These are all natural considerations but to keep your localization program fresh and ahead of the competition, you need to challenge your existing systems and the MT status quo. You have to have an open mind to succeed.

Why would you change MT systems?

MT engines are continually evolving and improving, delivering faster and better output and operating on more scalable systems. The level of effort involved in changing from one MT approach to another is now lower than, say, 10 years ago.

There may be performance problems with the existing MT systems. Lower output quality, slower turnaround and outdated, expensive pricing models are some of the main reasons why enterprises look to switch. Clients are looking for innovative approaches to pricing MT. Price per word doesn’t work anymore, nor do expensive annual licensing models. Clients want a pricing structure that reflects the level of MT use and captures the quality and utility of the MT output.

What are some key considerations for global enterprises when thinking of migrating or adding MT systems?

When you change from one system to another, whether it is changing from RBMT/Hybrid to SMT or between SMT systems, you need to realize that your translators will see new errors, whether coming from the engine output quality or from integration issues. There may be fewer errors, but they are very likely to be different.

If you look at a new system, then integration and MT interoperability are extremely important. If you can’t smoothly integrate systems, then this will pose translation and engineering challenges for the translators and post-editors. Each MT engine will produce different challenges and the translators and post-editors will notice these. For example, tags and placeholders could be problematic. The team must adopt a new mindset focused on how to handle these new error types.

Dictionary support is also key, and there are various ways terminology can be supported by an MT system. Will the new system support dictionaries, especially if a lot of time and money has been spent building them in the existing system?

Feedback loops are also critical to the success of an MT system. If you don’t have dialogue with engine developers, then they’re essentially developing in the dark. Feedback must be delivered.

Clients are also looking for analytics on engine performance and quality for both predictive analytics and analytics after projects. If you come across a system that helps model post-editor behavior based on predictive analytics, then that engine is going to win over any other basic well-performing MT engine. Consider whether you want analytics as part of your solution.

There is a lot of talk in the industry about connectors and interoperability. How does this impact MT migration?

Connectors and interoperability play an important role in any enterprise MT program. At Welocalize, we pay a lot of attention to how the MT engine is integrated with our TMS solution, GlobalSight. Today, GlobalSight has connectors into many of the leading MT engines, including MS Hub, Google, Asia Online, ProMT and Iconic Translation Machines. We have four more connectors in development.

We believe strongly in good engineering. Even when the engine quality is great, it is the right connector, integrating the system seamlessly without corrupting segmentation, that makes many problems simply go away and the post-editor experience significantly more pleasant. Put a lot of effort into robust APIs and MT migration and integration become low risk and pain-free.

What’s in the pipeline at Welocalize for MT and language technology?

As a sneak preview, we are working on a universal connector. A universal connector allows us to integrate all MT engines into a single piece of middleware, which allows us to expand our engine pool. We don’t want to reinvent the wheel for each client. One universal connector fits all. You simply drop it into a client’s environment and customize as and when needed.
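The universal-connector idea can be sketched as a thin adapter layer: every engine implements one common interface, and the middleware routes requests by name. All class and method names below are hypothetical illustrations, not the actual Welocalize implementation.

```python
from abc import ABC, abstractmethod

class MTEngine(ABC):
    """Common interface every engine adapter implements (names hypothetical)."""
    @abstractmethod
    def translate(self, text: str, source: str, target: str) -> str: ...

class EchoEngine(MTEngine):
    """Stand-in adapter; a real one would call a vendor API."""
    def translate(self, text, source, target):
        return f"[{source}->{target}] {text}"

class UniversalConnector:
    """Middleware that routes requests to whichever registered engine is asked for."""
    def __init__(self):
        self._engines = {}

    def register(self, name: str, engine: MTEngine):
        self._engines[name] = engine

    def translate(self, name: str, text: str, source: str, target: str) -> str:
        return self._engines[name].translate(text, source, target)

hub = UniversalConnector()
hub.register("echo", EchoEngine())
print(hub.translate("echo", "Hello", "en", "de"))  # [en->de] Hello
```

The design choice is that adding a new engine means writing one adapter class, while the TMS only ever talks to the connector.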

Also, just like almost everyone in the machine translation field, we are experimenting with neural MT and the results are very encouraging.

Do you have one piece of advice for anyone looking to change the MT system?

Consult an expert before making a big change. If you talk to a global provider like Welocalize, we can provide A/B testing and analysis of existing and proposed MT systems. We can help with solid analysis and make implementation recommendations.

One final word of advice, don’t get too carried away with industry buzzwords and breakthrough MT technologies. Make sure your engine is implementation and production ready. Use something that has been tested commercially.

Click here to view LocWorld31 Presentation: Managing an Effective MT Program in an Ever Changing MT Environment.

For more information on Welocalize weMT solution, click here.

Welocalize Discusses MT and Quality at 2016 TAUS Events in Dublin

Frederick, Maryland – June 3, 2016 – Welocalize, global leader in innovative translation and localization solutions, will be hosting and participating in panel discussions on hot topics relating to translation automation, machine translation (MT) and translation quality at the TAUS Industry Leaders Forum, June 6-7 and TAUS Quality Evaluation (QE) Summit on June 8. Both events take place in Dublin, Ireland.

At the TAUS Industry Leaders Forum, Welocalize VP of Language Technology Solutions Olga Beregovaya will host a session and panel discussion on Tuesday, June 7, at the Clontarf Castle Hotel in Dublin. The session, “Moving from Cro-Magnon Era (Thinking) of Language Services,” focuses on the topic of innovative business and pricing models in translation and will challenge the long-established ways of viewing and pricing translation and localization services.

The TAUS Industry Leaders Forum is a unique gathering of business leaders and experts in internationalization and globalization from global organizations who will join in a dialogue with executives and key leadership from the world’s largest Language Service Providers. The TAUS Industry Leaders Forum focuses on common issues and challenges for creating more efficient translation processes.

“The way we view translation automation and efficiency has evolved significantly over the past years and machine translation now provides global organizations with commercially robust solutions for meeting quality levels and increasing efficiency in the translation workflow,” said Olga Beregovaya, VP of language technology solutions at Welocalize. “TAUS events are always excellent forums for sharing best practices and advancements in language technology solutions and TAUS discussions contribute significantly to our moving forward as an industry to help clients publish more multilingual content at their desired quality levels.”

Welocalize is a member sponsor of TAUS. For more information about TAUS, visit

About TAUS – TAUS is a resource center for the global language and translation industries. Our mission is to enable better translation through innovation and automation. We envision translation as a standard feature, a utility, similar to the internet, electricity and water. Translation available in all languages to all people in the world will push the evolution of human civilization to a much higher level of understanding, education and discovery. We support all translation operators – translation buyers, language service providers, individual translators and government agencies – with a comprehensive suite of online services, software and knowledge that help them to grow and innovate their business. We extend the reach and growth of the translation industry through our execution with sharing translation data and quality evaluation metrics. For more information about TAUS, please visit:

About Welocalize – Welocalize, Inc., founded in 1997, offers innovative language services to help global brands reach audiences around the world in more than 175 languages. We provide translation and localization services, talent management, language tools, automation and technology, quality and program management. Our range of managed language services include machine translation, digital marketing, validation and testing, interpretation, staffing and enterprise translation management technologies. We specialize in consumer, technology, manufacturing, learning, oil and gas, travel and hospitality, marketing and advertising, finance, legal and life sciences industry language solutions. With more than 800 full-time employees worldwide, Welocalize maintains offices in the United States, United Kingdom, Germany, Ireland, Spain, Italy, Romania, Poland, Japan and China.

Neural Machine Translation is the Next Big Thing

Welocalize Senior Computational Linguist, Dave Landan, writes about the trends in machine translation (MT) and neural machine translation (NMT) and takes us through the evolution of MT. He shares insights on how Welocalize is using cutting-edge innovation and technologies in its language tools solutions and MT programs.

It’s been almost nine years since Koehn et al. published Moses: Open Source Toolkit for Statistical Machine Translation1 in 2007, which fundamentally changed the way machine translation (MT) was done. But this was not the first fundamental shift in MT, and it looks like it won’t be the last. To ensure our clients receive world-class levels of innovation in the area of language technology, we are working with what we are pretty sure will be the next big thing in MT. More about that to follow, but first a little context about how MT has evolved.

Brief History of MT

The field of MT began in earnest in the 1950s, first with bilingual dictionaries that permitted only word-by-word translation. Translations by this method are seldom fluent; they are easily tripped up by polysemous words (words with more than one meaning, like “bank” or “Java”) and are often very difficult to understand for someone who doesn’t already know the intended meaning.

From this beginning, the Next Big Thing was the introduction of rule-based machine translation (RBMT).  First there was direct RBMT, which used basic rules on top of the bilingual dictionaries.  Those helped with word order problems, but still didn’t address the other problems.  Next, we saw the introduction of transfer RBMT, which added more rules to deal with morphology and syntax to address those problems.  These systems can give performance that is quite good, but because of the richness of language, the systems are often incomplete in vocabulary coverage, syntactic coverage, or both.  RBMT is also expensive because it requires humans (linguists) to write all the rules and maintain the dictionaries that the systems use.  Still, due in part to the high cost of computing resources, RBMT dominated the field between 2000 and 2010.  There are still companies that offer good RBMT solutions today, often hybrid solutions combining RBMT with SMT.

Statistical Machine Translation (SMT)

Thanks to increased computing power at a lower cost and some pioneering research from IBM around 1990, work on statistical machine translation (SMT) began to take off in the late 1990s and early 2000s. In 2007, Moses was earmarked as the next big thing in MT; however, it wasn’t until 2010-2012 that it became the foundation upon which nearly every commercial SMT system was based. SMT shifted the focus from linguists writing rules to acquiring aligned corpora, which are required to train SMT systems. SMT has limitations as well: language pairs with different word order are particularly tricky, and unless you have vast amounts of computing resources, modeling long-term dependencies between words or phrases is nearly impossible.

There have been incremental improvements to SMT over the past several years, including SMT using hierarchical models and the introduction of linguistic meta-data for grammar-informed models. But nothing has had as large an impact as the jump from word-by-word to RBMT, or from RBMT to SMT, until now.

Neural Machine Translation (NMT)

Over the past two years, researchers have been working on using sequence-to-sequence mapping with artificial neural networks to develop what’s being called neural machine translation (NMT).  Essentially, they use recurrent neural networks to build a system that learns to map a whole sentence from source to target all at once, instead of word-by-word, phrase-by-phrase, or n-gram-by-n-gram.  This eliminates the problems of long-term dependencies and word-ordering, because the system learns whole sentences at once.  Indeed, some researchers are looking at extending beyond the limitations of the sentence to whole paragraphs or even documents. Document-level translation would theoretically eliminate our need for aligned files and allow us to train on transcreated material, which is unthinkable in any system available today.

NMT has shortcomings as well. Neural networks require a lot of training data, on the order of one million sentence pairs, and there’s currently no good solution to translating rare or unseen words and out of vocabulary (OOV) words.  There have been a few proposals on how to address this problem, nothing firm yet.  At Welocalize, we’re actively pursuing ideas of our own on how to fix the OOV problem for client data and we’re also working on how to overcome the amount of client data necessary to train a good NMT system.
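One family of proposals for the OOV problem is subword segmentation: breaking unseen words into smaller known units so the network rarely encounters a truly unknown token. The toy greedy segmenter below, with an invented vocabulary, illustrates the idea only; it is not a description of the Welocalize approach.

```python
# Toy greedy subword segmenter -- one illustration of how unseen words
# can be broken into known units. The vocabulary here is invented.
VOCAB = {"un", "translat", "able", "word", "s"}

def segment(word: str) -> list:
    """Greedily split a word into the longest known subword units,
    falling back to single characters when nothing matches."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest match first
            if word[i:j] in VOCAB:
                pieces.append(word[i:j])
                i = j
                break
        else:
            pieces.append(word[i])  # character fallback for true OOVs
            i += 1
    return pieces

print(segment("untranslatable"))  # ['un', 'translat', 'able']
```

Real systems learn the subword vocabulary from the training corpus (for example with byte-pair encoding) rather than hand-picking it as here.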

The other major shift is that training large neural networks efficiently requires a different set of hardware. SMT requires a lot of memory to store phrase tables, and training can be “parallelized” to work better on CPUs with multiple cores. NMT, on the other hand, requires high-end GPUs (yes, video cards) for training. We’ve invested in the infrastructure necessary to do the work and we’re working hard to get this exciting new technology ready for our clients to use. Our early results with a variety of domain-specific data sets are very promising.

We’re not alone in our excitement. Many talks and posters at MT conferences are dedicated to advancement and progress in NMT. Google and Microsoft are both working on ways to use NMT in their translation products, with a special interest in how NMT can significantly improve fluency in translation between Asian and European languages. Watch this space in the weeks and months to come for updates on our progress with this exciting technology.


Dave Landan is Senior Computational Linguist at Welocalize. David.Landan@welocalize

Welocalize is a bronze sponsor at EAMT 2016. Click here for more information.

Read the Welocalize and Trend Micro MT Case Study: MT Suitability Pilot Shortens Translation Times and Reduces Costs

1 Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the ACL-2007 Demo and Poster Sessions, Prague, Czech Republic.


MT Suitability Pilot Shortens Translation Times and Reduces Costs

A Welocalize and Trend Micro Case Study

A global leader in IT security, Trend Micro required a localization program that would enable higher volumes of technical content to be translated at speed without compromising quality.

In 2014, Trend Micro operations in Asia set out to shorten translation times to enable more translation volume and to reduce costs. The company has many agile projects that require a localization program that matches development speed, quality and provides technical content in more locales.

To investigate content suitability for machine translation (MT), Welocalize proposed an MT pilot that ran throughout 2015.

Welocalize and Trend Micro conducted the MT Suitability Pilot for technical documentation and UI content for certain Trend Micro SaaS products. For the MT pilot, the team identified three languages to test – French, German and Simplified Chinese.

READ MORE: Welocalize Trend Micro Case Study

weMT Approach

  • The Welocalize Technology Solutions team proposed an MT evaluation for the three test languages: automatic scoring, human evaluation and post-editing productivity tests, using two selected MT engines.
  • An MT pilot KPI Scorecard was developed to compare the two MT systems, per language, providing analytics on engine performance, language quality and content suitability.
  • Welocalize created two MT systems per language pair, trained using a common Trend Micro corpus. One MT system was an in-house Moses-based system; the other was built using a customized version of Microsoft Translator Hub.
  • Both MT systems were set up specifically to handle the complex Trend Micro content.

MT Evaluation Process for Trend Micro

  • Automatic scoring systems gave an indication of the quality of the MT output, using metrics that correlate with human judgement. The automatic metrics used for this MT pilot were BLEU, F-Measure, TER, METEOR and GTM.
  • Human evaluation gave insight into the quality, adequacy and fluency of the content, from a linguistic perspective, for each MT engine and each language pair. For the MT pilot, error annotation helped to identify the most frequent issues found in the raw MT and to improve MT output in future rounds of MT engine training.
  • Productivity tests were performed with two experienced Trend Micro translators per language, following the usual QA process. This was to evaluate the productivity gains when moving from human translation to MT post-editing. The translators produced real-time post-editing productivity metrics for translations, provided by both MT systems.
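The automatic metrics above all compare MT output against a human reference. As a sketch of the core mechanism behind BLEU, clipped n-gram precision can be computed as follows (simplified: single reference, no brevity penalty, no smoothing).

```python
from collections import Counter

def ngram_precision(candidate: str, reference: str, n: int) -> float:
    """Clipped n-gram precision, the core of BLEU (simplified:
    single reference, no brevity penalty, no smoothing)."""
    cand = candidate.split()
    ref = reference.split()
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    # Clip each candidate n-gram count by its count in the reference.
    overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    total = sum(cand_ngrams.values())
    return overlap / total if total else 0.0

p1 = ngram_precision("the cat sat on the mat", "the cat is on the mat", 1)
print(round(p1, 2))  # 0.83
```

Full BLEU combines precisions for n = 1..4 with a brevity penalty; metrics such as TER and METEOR instead count edit operations or allow stem and synonym matches.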

All three language pairs showed good results, validating the proposed MT solution as a good fit for Trend Micro’s content.


  • MT pilot demonstrated significant time and cost savings. Trend Micro reduced translation turnaround time (TAT), and subsequently time to market by 15%.
  • Productivity gains from post-editing alone led to an overall TAT improvement of 15%.
  • Quality of the MT post-edited projects matched quality of human translation.
  • Positive results led to further MT deployment at Trend Micro.

“Because of the success of Welocalize’s MT Suitability Pilot, we have verified that MT is workable in our organization. The overall process is smooth and we saw that MT can save around minimum 15% of translation time even for the most challenging translations, especially with high volumes. With high volumes, we also recognized translation cost savings and quality evaluation (QE) was good, especially when we introduced post-editing into the process for projects with higher word counts. Welocalize is very professional and customer orientated. We are moving forward with Welocalize to expand the MT program, increasing content types and number of languages.” – Di Wang, Manager, Trend Micro Research & Development

Key Highlights 

  • Three languages piloted: French, German, Simplified Chinese
  • MT evaluation approach: Automatic Scoring, Human Evaluation & Productivity Testing
  • Two MT engines, MS Hub and weMT Moses, customized for content type
  • Experienced post-editors
  • MT engine performance stats based on locale
  • Compliance with Trend Micro Style Guides
  • Terminology Management
  • Scalable, flexible weMT program
  • Global teamwork & world class customer support

Global brands trust Welocalize with designing and executing technology-driven language programs. For more information on Welocalize weMT programs, click here.

Find out more about how Welocalize helped Trend Micro reduce translations costs and increase volumes. Click here to read the full Welocalize Trend Micro Case Study.


Spotlight on GlobalSight 8.6.7 New Release

Welocalize recently launched an update to the open source translation management system (TMS), GlobalSight. Details on the release are noted here: Welocalize Releases GlobalSight 8.6.7. In this blog, Senior Software Engineer at Welocalize, Andrew Gibbons, guides us through some of the key features and benefits of this latest GlobalSight release.

After almost a year of hard work, the Welocalize Development Team presented the new public release of GlobalSight, 8.6.7. This latest version includes a host of new features for the GlobalSight community, including expanded connectivity, improvements to UI, major new online review options and updates to core components.

We have also completed a number of “under the hood” improvements to GlobalSight.

  • Java client dependency removal
  • Updated JBoss version
  • Updated database connector enabling MySQL database update to 5.7.11

Java Client Dependency Removal

Preparing the ground for the Java client’s deprecation, we have removed the client-side Java dependency for most functionality. Most functions that previously used Java are seamlessly replaced with non-Java equivalents. The Java CreateJob function is permission-switchable to the non-Java version. We aim to remove all client-side Java requirements by the next public release.

Updated JBoss Version

We have updated the JBoss application server to EAP 6.4.0 (AS 7.5), which is compliant with Oracle Java 1.8. If the GlobalSight admin is updating from a previous install, the Java version will need to be updated as well. Note that each JBoss version works best with its corresponding Java version.

Updated Database Connector

We have also updated the MySQL connector, tested it with MySQL Database 5.7.11 and improved database connection pooling.


There are now more systems that connect to GlobalSight 8.6.7. These include:

  • Blaise
  • Git (stash on top)
  • Eloqua Dynamic Contents Support
  • COTI level 1
  • Drupal 8 (needs plugin)
  • AEM (CQ5) (needs plugin)

Welocalize can supply plug-ins for connectors requiring them. Connectors are enabled with permissions.

Other GlobalSight 8.6.7 Key Features:

XLIFF 2.0: XLIFF 2.0 is now supported as a source format, implementing the core and translation modules. It is also available as an offline file format for import into other translation tools, implementing the core.
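Because XLIFF 2.0 is plain XML, a minimal reader is easy to sketch. The snippet below parses a tiny XLIFF 2.0 core document with Python’s standard library; it illustrates the format only and is not GlobalSight’s own filter.

```python
import xml.etree.ElementTree as ET

# Minimal XLIFF 2.0 core document and reader -- a sketch of the format,
# not GlobalSight's own implementation.
XLIFF = """<xliff xmlns="urn:oasis:names:tc:xliff:document:2.0"
       version="2.0" srcLang="en" trgLang="de">
  <file id="f1">
    <unit id="u1">
      <segment>
        <source>Hello world</source>
        <target>Hallo Welt</target>
      </segment>
    </unit>
  </file>
</xliff>"""

NS = {"x": "urn:oasis:names:tc:xliff:document:2.0"}
root = ET.fromstring(XLIFF)
for seg in root.iterfind(".//x:segment", NS):
    src = seg.find("x:source", NS).text
    tgt = seg.find("x:target", NS).text
    print(src, "->", tgt)  # Hello world -> Hallo Welt
```

The key structural units are `<unit>` and `<segment>` with paired `<source>`/`<target>` elements, all in the XLIFF 2.0 core namespace.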

Online Review: We have added a new online review tool that can give previews of Adobe Creative Cloud® and Microsoft Office® 2010 files. Right-click on the source file link to see display options. Note that previews require the corresponding licensed applications.

Machine Translation (MT): We identified a number of use cases for MT, specifically gisting and post-editing. In GlobalSight 8.6.7, any MT is automatically populated to the target segment. The GUI to manage the MT has been updated, and we’ve dropped the properties file method. There are also new methods to retry the MT engine if the first attempts fail.
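The retry idea can be sketched as a generic retry-with-backoff wrapper. This is an illustration of the pattern only (GlobalSight itself is Java-based), and the function names are hypothetical.

```python
import time

# Generic retry-with-backoff wrapper -- a sketch of the pattern,
# not GlobalSight's actual (Java) implementation.
def call_with_retry(fn, attempts=3, base_delay=0.01):
    """Call fn(), retrying on connection failure with exponential backoff."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts:
                raise  # out of retries: surface the error
            time.sleep(base_delay * 2 ** (attempt - 1))  # back off and retry

# Hypothetical MT request that fails twice, then succeeds.
calls = {"n": 0}
def flaky_mt_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("engine unavailable")
    return "translated segment"

print(call_with_retry(flaky_mt_request))  # translated segment
```

Capping the number of attempts and backing off between them keeps a temporarily unavailable engine from stalling the whole translation workflow.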

Minor Enhancements

We have made some minor enhancements, such as moving the download off-line kit button to under the kit options. We had observed that for large kits, translators had to scroll to the bottom of the page to click the download button. Small usability changes like this can help speed up translator throughput.

Other usability changes include:

  • Saving the job id and job name to the TM segment. This is useful for tracking down who did what and when.
  • Changed attribute handling.
  • Advanced search options for online TM search.
  • AuthorIT String id handling. We have included additional TM matching information to harvest the AuthorIT String ID for each translatable segment from the AuthorIT localization kits.
  • Ability to recreate a job. This is where a job may come in from a connector and it is important to keep the same job id and metadata for export. Such a job occasionally fails due to bad files. After debugging and repair, we want to ensure that the job metadata remains the same and to be able to recreate the job with the fixed files.
  • Ability to export a ready job. This is very handy for checking a file before sending it for translation.

GlobalSight 8.6.7 release notes, including product documentation, bug trackers and forums, are available online, and the new version of GlobalSight is available to download. If you would like a demonstration of GlobalSight or to discuss any of the features, please contact us today.


Based in Dublin, Andrew Gibbons is a senior software engineer at Welocalize, specializing in GlobalSight.

Discussion of Software Localization and Testing with Derek McCann

Derek McCann is Chief Customer Officer for North America at Welocalize. Derek joined Welocalize last year from Microsoft, where he worked for 23 years. His latest position at Microsoft was Senior Director Internationalization and Localization, holding senior responsibility for the Microsoft Windows localization program. In July 2015, Derek launched Microsoft Windows simultaneously in 110 languages. In this interview, Derek speaks with Welocalize Global Communications Manager, Louise Law, and shares some of his valuable insights into the world of software localization and testing.

What is the state of the global software market today?

There are a lot more companies involved in software production and development. It’s dawning on a lot of global companies that producing products that are locally relevant is key to success and competitive advantage. Simply localizing into a small number of main languages is no longer enough. To be a quality and competitive software company, you have to launch in more languages. Breadth is driving the industry and software has to be ready for the world.

Customers are more demanding now because they have a lot of choices and options. The millennial generation will work with something and if they don’t like it, they’ll drop it and find something else similar. They have less allegiance. For software companies to differentiate themselves, they have to be present in many markets and this means worldwide products need thorough localization. Today’s digitally savvy software users want technology in their own language and demand an experience that meets their needs and culture – an experience that is locally relevant.

What are some of the challenges with software localization testing?

Timeliness and agility are top challenges. In the old days, when you had big software launches every two to three years, there was time. Now, products are updated monthly or weekly and are constantly being worked on, with new features being sent out to users all the time. You must be agile in localization. Small packages of content released rapidly and continuously into the translation workflow are the new velocity in the technology space. Your localization QA and testing cycle, what we often refer to as validation, has to mirror that agility. All locales have to be tested and released simultaneously, for every feature and fix. Welocalize provides validation services for many large-scale technology providers. It is essential to make validation and review a part of the overall workflow. Failure to validate is not an option.

Local relevance drives any software localization program; however, it doesn’t mean just linguistically translating software. It includes translation, cultures and other geopolitical factors. How is a product used in certain markets? Local laws and traditions will impact how a product is used. For example, if your software has purchase instruments and is processing financial transactions, then it has to be locally adapted to manage different credit cards and government regulations for certain territories.

Any font or display of text has to be considered. Is it right to left or left to right? Some complex scripts require further adaptation at localization. A lot of localization issues can be resolved at the development and engineering stage, before testing starts. Those who develop the original software need to make sure it presents in all fonts at the development stage. If it is destined for Asian markets, it needs to be able to present complex Asian characters.

Software functional and linguistic testing is not just about words. It should validate how the product is configured to cater for all local markets. A good software localization QA and testing program will fully understand world readiness scenarios and have regional insights on all users. Culture impacts how people think and use software.

How do software companies ensure the same levels of user satisfaction across all local markets?

Companies must understand how their products are being used in different markets. They must listen to the user voice. Users are unique and will use software in different ways, often depending on where they are based. Technology organizations need continual user insights to gather intelligent information, know how the product is being used and prioritize features. For example, is a feature not functioning properly or is it simply not designed appropriately for some local markets?

We are driving more and more social media listening and monitoring. The use of machine translation (MT) can help to understand feedback in all languages. Through MT and post-edited MT, you automate the translation workflow and keep user data flowing. Social media and forums contain invaluable user information that organizations can gather and react to. Customer insights are invaluable. They enable you to understand how your product is being used or if it is failing in certain areas. If Chinese users are mocking certain product features on a Chinese forum, then you need to understand what is being said and act accordingly.

How will global technology companies succeed in the future?

They will become learning organizations. Nothing stays the same in technology so you have to be agile, listen to your customers and constantly build that learning into your development, localization and test programs. As we learn, we can scale and automate and continually deliver the right customer experience to all local markets.

Welocalize provides validation services for testing and QA. To learn more, visit

Six Expert Insights on E-Commerce Localization

Launching global e-commerce sites is a relatively fast and effective way to reach new markets, compared to the traditional brick-and-mortar retail business models. Online retailers know that adding language sites, with the right delivery and support infrastructure, helps to expand market share and grow revenue. Trading online can also have its share of challenges and risks. Your valuable brand needs global representation if you want to maximize sales. You want to properly invest in knowing your buyer in each target market, including language preferences. This begins by evaluating the relevant psychographic, sociographic and demographic details of your target consumer in order to gain brand awareness, consumer engagement and revenue growth in each market.

Online commerce is a highly competitive market and online consumers can be fickle. With so much choice, they often lack loyalty and will think nothing of switching brands if you don’t deliver your “brand” promise. Because e-commerce giants have so much buyer power, if you are competing purely on price, then you have a challenge on your hands. You need to differentiate your e-commerce offering by creating a personal and unique online experience, and a good localization strategy can help you achieve that by speaking the language of your target buyer.

To reach international markets, successful e-commerce goes way beyond simply translating a website. There are many different factors that any new or existing e-commerce organization can consider as part of their strategy:

INSIGHT ONE: Have localized knowledge on online spending habits

Knowing the demographics and cultural preferences of your target audiences is crucial, as is knowing when popular online shopping days take place. You can then time online promotions and pricing models accordingly and, if necessary, scale up delivery operations when increased demand is expected.

The Thanksgiving holiday (celebrated in the US on the fourth Thursday in November) and the following “Black Friday” and “Cyber Monday” days at the end of November are days when online retailers cut prices to encourage mass spending. E-tailers make the most of the fact that people are not at work and are looking for bargains in the run-up to Christmas. In 2015, online shoppers spent $4.45 billion online on Thanksgiving Day and Black Friday. This surge in online shopping also takes place in the UK, where shoppers spent a record £1.1 billion with online retailers on Black Friday. UK retailing giant John Lewis said Black Friday was its biggest day of retailing.

China has a similar day, known as Singles Day. It is one of the largest online shopping days in the world. Sales on Alibaba sites in 2015 reached $14.3 billion. Take advantage of shopping days around the world.

INSIGHT TWO: Develop and localize a mobile app

M-commerce is outpacing e-commerce three-to-one. According to a report by PayPal, mobile accounts for 20% of its overall purchase volume worldwide, and 33% of online shoppers say they have used a smartphone to make a purchase. In a recent article, PayPal’s director of mobile commerce warned retailers that mobile payments should be a top priority to provide the experience consumers want for shopping online.

Any online retailer must consider purchasing or developing an app to enable mobile purchase for all languages and local markets. All web, product, marketing and customer support information has to be readable and accessible in all target languages for the relevant mobile platforms.

INSIGHT THREE: Awareness of trading laws and local regulations

Localization of e-commerce is not simply translating the website into another language. International and local trading laws and regulations must also be addressed: taxes, product returns and refunds, financial transactions and currencies must all be localized. Any purchase instrument, which acts as an important part of the e-commerce site, must be able to trade in local currency and process whatever credit or debit cards are used in each country.

INSIGHT FOUR: Build localized digital marketing campaigns for each market

Rolling out a global digital marketing campaign does not mean creating one campaign and then just translating the words. Build individual campaigns from scratch: directly translating existing campaigns will not work. Marketing materials like PPC and banner campaigns need to be recreated to reflect cultural differences and different online consumer habits. This approach also applies to online searchability. Simply translating keywords won’t get you discovered. Multilingual digital marketing requires insight into local buyer behavior and how local shoppers think. What words will be keyed into which search engine? Someone in China will go to a different search engine than someone in the US, and both will use different keywords to search for the same item.

INSIGHT FIVE: Social media listening and localization of UGC

The e-commerce model is largely driven by consumer reviews, ratings, social media posts and forum discussions, collectively known as user-generated content (UGC). Online consumers have so much information at their fingertips that they can read product reviews by people all over the world – good and bad. Today’s savvy online shoppers will be vocal about the e-commerce experience. Online retailers can benefit from understanding what is being said about their service and products. Machine translation (MT) can help quickly translate high volumes of content so e-tailers can be aware of what people are saying about them and act accordingly. This information can be used to improve products and the overall online experience. Social media monitoring enables global organizations to continuously develop, learn and evolve the online shopping experience.

INSIGHT SIX: Website localization

The main landing page is very important to any e-commerce organization, including the domain name. Online consumers want to see .com, .fr or .cn when they land on the site. Having a country-specific domain is a key part of the overall localization process.

The content on the homepage is very valuable and must reflect local tastes and habits. For example, big retailers like Amazon will often promote top-selling electronic products on their main Japanese page, because this is the main product group people in Japan are searching for. The landing page for Amazon in the UK or Amazon in the US will differ, depending on current tastes and trends.

Establishing a globally recognized e-commerce brand involves many strategic marketing decisions about localizing websites and UI, executing multilingual digital marketing campaigns and intelligently analyzing social media posts to drive influence. Many of these activities are best achieved by teaming up with a strategic localization partner to provide expert insights, helping guide you to success. Welocalize experts work with many leading e-commerce brands to create a truly global strategy that feels local and personal to the individual online shopper. Welocalize recently announced the acquisition of Adapt Worldwide, a multilingual digital marketing agency, to enable global brands to reach online consumers in multiple digital channels. Click here for more information.


Louise Law is Global Communications Manager at Welocalize.






Welocalize Translates More than One Billion Words a Year


It started with one word in 1997, the year Welocalize was founded by Smith Yewell. One word that initiated a chain reaction, advancing the way translation services are delivered to global brands. That word was Pathfinder. Today, Welocalize manages more than 1.2 billion words a year: 100 million words a month, or an average of 3.3 million words per day. Welocalize filled requests for 400+ language pairs last year for some of the world’s largest global brands. To put that into context, the average person speaks 123,205,750 words in a lifetime, according to The Human Footprint. That means you would need to be reincarnated 8.1 times to speak one billion words!
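The arithmetic behind those figures can be sanity-checked in a few lines (a back-of-the-envelope sketch; the monthly and daily numbers in the text are rounded):

```python
# Back-of-the-envelope check of the word-volume figures cited above.
words_per_year = 1_200_000_000           # 1.2 billion words a year
words_per_month = words_per_year / 12    # ~100 million words a month
words_per_day = words_per_year / 365     # ~3.3 million words a day

lifetime_words = 123_205_750             # average lifetime, per The Human Footprint
lifetimes = 1_000_000_000 / lifetime_words  # lifetimes needed to speak 1 billion words

print(round(words_per_month), round(words_per_day), round(lifetimes, 1))
# → 100000000 3287671 8.1
```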

One of the most popular free consumer online translation tools translates the equivalent of one million books a day and uses a technique called statistical machine translation, in which a database is fed with millions of human-translated documents and an algorithm then finds patterns. While online tools are an excellent product for anyone who is online and requires a quick translation to simply understand the “gist” of multilingual content, those who need the language to be “right” in representing their brand and content depend on qualified language service providers like Welocalize.

For growing multinational organizations and businesses trying to reach a global audience, formalized processes for translation and localization are fundamental in doing business around the world.  Whether it is supporting a multilingual digital marketing strategy or providing continuous compliance training to a dispersed global workforce, a more in-depth, tailored and sophisticated approach that can manage complex workflows is required for success.

Content types and languages start the decision process for language service buyers. The range of language service requirements may be simple “gisting” delivered by machine translation for large volumes of social media and online forum content to “transcreation” and cultural adaption of marketing and advertising content to drive user acceptance and engage a consumer in market.

One thing is certain: localization ensures the concept and facts of a global brand or product are retained, while recreating key messaging and content to suit local audiences and cultures. Literal, word-for-word translations are risky in business and generally not acceptable for global marketing. Taking short-cuts without careful review and qualified language specialists can destroy a campaign and damage a company’s reputation. Marketers know brand loyalty is earned one customer at a time, and transcreation and cultural adaptation of digital marketing messages are key to relating to your target audience.

When skilled human translators translate content, they engage their brain, emotions, life experiences and cultural understanding to adapt a brand’s content to resonate with the target audience. In the process of translating, linguistic copywriters and translators hear the first version of the work as profoundly and completely as possible. They discover the linguistic charge, the structural rhythms, the subtle implications, the complexities of meaning and suggestion in vocabulary and phrasing, and the ambient, cultural inferences and conclusions. This is a kind of reading as deep as any encounter with a literary text can be (Words Without Borders). They take the source content and translate words to create messages that new audiences will understand. Simply providing a straight literal translation is not a good localization and globalization strategy.

Welocalize provides a wide range of localization and translation services in more than 175 languages. We aim to demonstrate our innovation through sharing and collaboration with industry peers, thought leadership organizations and clients. By doing things differently, Welocalize is driving the localization industry forward, pushing boundaries and breaking down communication and language barriers. It all begins with one word.


Lauren Southers is responsible for marketing automation and global sales support at Welocalize.

What E-Commerce Teams Need from a Strategic Language Service Provider

At Welocalize, we implement localization solutions for companies and global brands, all from different verticals, and of different shapes and sizes. What do they have in common? The vast majority need to sell their products and services online. We now live in the age of the online consumer, and the ability to learn about, engage with and purchase products and services online is key to meeting customer expectations.

Companies that once were defined as ‘traditional’ store-front retailers now fully embrace e-commerce to the extent that it is core to their business, supporting functions as diverse as sales, product information management, brand marketing, customer service, and crucially, international growth.

The e-commerce team’s function and its role in the dissemination of product, brand and company information is now more critical than ever. With inventory and source content typically managed and channelled by a central team, the localization function for e-commerce provides essential support in supplying the language variants and helping to capture global audiences and revenue.

Here are the top five characteristics that e-commerce teams need to look for when choosing a strategic language service partner (LSP):

SCALE: Large e-retailers need to translate product descriptions for many thousands of SKUs. In today’s world of fast fashion, quick-moving trends and seasonal ranges, there is a constant churn of new content for translation, with initial launches for new languages or seasons sometimes in the millions of words.

Select a partner that has the following capabilities:

  • Ability to support all target languages and locales
  • A super-robust, scalable supply chain for each language
  • Ability to turn around large volumes of content in short time-frames
  • Excellent purchasing power within the supply chain to give you best value for money

TECHNOLOGY & AUTOMATION: In the retail sector, there are high volumes of content, often with short, aggressive time-scales and potentially many individual hand-offs. There are also many participants in the supply chain, including authors, PMs, translators, reviewers and others, and potentially many language requirements. You will want to automate as much as possible and reduce any manual steps.

Select a partner that has the following capabilities:

  • Connectors to your PIM or e-commerce platform to pick up and deliver the content and eliminate manual hand-offs
  • Workflow automation to automate and accelerate the workflow, from source to translation to review to delivery, incorporating translation memory, glossary tools and review tools
  • Automated validation tools to capture simple errors, support the work of the translators, assure file integrity and avoid corruptions
  • Machine translation (MT) engines, customized for your content, to reduce cost and help with scale, teamed with expert, experienced post-editors
  • A client-facing portal which allows your stakeholders to send in and track ad-hoc requests outside of the standard workflow
  • Potentially proxy or hosted translation solutions to reduce your internal IT footprint

QUALITY MANAGEMENT: Quality, impact and customer experience, is everything. It is important that your brand is properly represented in the target language market and that the experience for the customer is flawless and culturally appropriate.

A strategic LSP should have a transparent and robust quality management system, which they can show you in detail. Their teams should include experienced quality managers who can capture your specific requirements and preferences and ensure these are implemented throughout the translation process. LSPs should be able to define, influence, monitor, measure and control the desired quality levels.

Typically, in large-scale translation projects, things may not always go right, so the approach to and effectiveness of problem solving is important. How your LSP helps the outputs go from good to great is even more important than delivering good sample translations at the beginning.

Select a partner that has the following capabilities:

  • Defined quality management processes for complex workflows
  • Quality certifications based on ISO and industry standards
  • Dedicated language quality review, testing and in-country resources

SEO & MULTILINGUAL DIGITAL MARKETING SUPPORT: Search engine optimization (SEO) is foremost in the minds of most e-commerce decision makers. While much attention is given to SEO strategy for English, often things fall down when you start to scale across many languages.

A lot of money is invested in e-commerce stores and subsequent translations. This investment needs to be followed through with a defined multilingual SEO and digital marketing strategy to ensure performance – do not rely on the central SEO team to support all locales; they will not have the capacity or the linguistic expertise.

Select a partner that has the following capabilities:

  • Research and identify the correct keywords for your target markets; a direct translation is not enough, as you need the terms people actually use when searching in the target country
  • Identify less obvious keyword opportunities for each target market; there may be easy wins
  • Ensure keywords are correctly incorporated into the content, including meta titles, descriptions and other page attributes
  • Know which search engines are more important for each market
  • Provide technical SEO support
  • Prioritize spend by creating content for the most important landing or category pages
  • Help with multilingual digital marketing upon launch to drive traffic including paid search, ad creation, social media outreach and engagement and link building strategies


Select a team that shares your passion and motivation to succeed. E-commerce today is strategic and central to any retailer’s business. It is important that your LSP understands the stakes, the risks and the visibility involved, and can add value to your team every step of the way.

External program managers and account managers may end up liaising with many of your internal teams on a daily basis, including: development and IT teams, e-commerce vendors, creative teams, external agencies, site merchandisers and PIM personnel.

Priorities will shift and change, unforeseen issues will arise, and new requirements and complications will abound. Your LSP needs to be flexible, proactive and risk-aware, and must show they can own and drive the localization roadmap, assuring integration of technology, content and resources across many languages so that you can launch on time. Momentum and urgency need to be maintained all the way through the supply chain. Your needs as a client must be articulated and advocated throughout their internal teams and functional leads. Use their internal experts, including digital media managers, quality managers, solution architects and MT experts, to brainstorm, solve problems and create value for your internal team.

An LSP’s knowledge of, and commitment to, the integration of technology, content and resources across many languages will mean you launch your multilingual e-commerce on time, on brand and with the desired results.

I am interested to know whether you agree with these five characteristics. What else would you add to the list? Please send me your thoughts.


Based in London, Robert Martin is Business Development Director at Welocalize.


Twitter: @robert_global




Source Authoring Improves Machine Translation Programs

Machine Translation (MT) is a valuable way to reduce localization costs and get to market faster. It can also be a complex process, with quality issues and excessive post-editing.

MT is fast becoming a significant part of many localization workflows, and raw “gisting” MT, post-edited MT (PEMT) and conventional human translation often coexist within the same localization program. At Welocalize, we see between 10% and 100%+ productivity gain with PEMT, depending on language, content complexity and desired quality level. For many clients, we post-edit MT output for a wide range of content types, including technical documentation, marketing and training materials, UI, website content, UA and consumer support documentation and user-generated content (UGC). One of the ways we look to improve MT quality levels is to assess how “MT-friendly” the source content is. Content optimization and pre-editing for MT across a wide range of source texts and styles can be a good solution for improving output, keeping costs down and keeping the volume of publishable content high.

I recently delivered a joint Welocalize and Acrolinx webinar with Olga Beregovaya, Welocalize Vice President of Technology Solutions, and a number of content software experts from Acrolinx. The joint webinar, New Breakthrough with MT? The Secret is in the Source, shared secrets and best practices on how to optimize source content and increase MT readiness. Using sample data from several domains, we wanted to investigate whether improving the source authoring works and whether source language optimization software improves the overall effectiveness and efficiency of the MT workflow.

Typical MT Output Issues

There are a number of issues associated with MT output, including: capitalization, punctuation, spacing, inconsistent terminology, word order, omissions and additions of text and compound formation. Many of these issues can be controlled and resolved by introducing the concept of “quality at the source.”

Case Study Exercise

In the webinar, we discussed the methodology and results of a case study exercise undertaken by Welocalize and Acrolinx teams, using MT, PEMT and source content optimization software to address the quality of the source content.

We took a number of samples, with the approval of our clients, totaling 1,000 words, and translated the content from English into German. Each sample went through customized MT engines for translation. In the first cycle, the source was left unedited. In the second cycle, the same sample was analyzed by Acrolinx technology, which proposed a set of changes based on Acrolinx “writing for MT” rules.

As a result, 52% of the source content was re-authored, based on Acrolinx recommendations to improve the source. Overall, with no source content analytics, 52% of MT output required post-editing. After introducing the Acrolinx-proposed changes, only 43% of MT output required post-editing. 68% of the re-authored segments produced better MT quality according to human ranking, and the PE distance, a measure of how much effort is required to bring MT output to the desired quality level, improved significantly, by 9%.

By addressing the source quality, the improvement in PE distance translates to:

  • 7-8% productivity gain for translators
  • 5% post-editing discount improvement
  • 5% time-to-market improvement
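One way to read the headline numbers above (an illustrative calculation only, not the study's own workings): the post-editing requirement fell from 52% to 43% of output, a nine-percentage-point absolute drop that corresponds to a noticeably larger relative reduction in post-editing work.

```python
# Illustrative reading of the case-study percentages (hypothetical arithmetic,
# not data from the actual Welocalize/Acrolinx study workings).
pe_before = 0.52   # share of MT output needing post-editing, unedited source
pe_after = 0.43    # share after "writing for MT" source re-authoring

absolute_drop = pe_before - pe_after       # 0.09 → nine percentage points
relative_drop = absolute_drop / pe_before  # ~17% less post-editing work overall

print(f"{absolute_drop:.2f} absolute, {relative_drop:.0%} relative")
# → 0.09 absolute, 17% relative
```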

Adding a technology layer to improve the quality of the MT input does improve the overall performance of the MT program. One of the secrets to MT success is to continually train MT engines, resulting in a more intelligent process that keeps volumes high and costs low. The same principle applies to source content authoring: the more content is processed through the Acrolinx platform, the more intelligent it becomes over time.

At Welocalize, we are addressing the quality of the source authoring for many of our clients and are partnering with leading authoring tools developers, like Acrolinx, to improve the performance of our MT programs. The webinar generated some interesting discussions, and many of the key points we made resonated with webinar attendees. If you have any questions about MT, PEMT or using source language checking software as part of your MT program, please feel free to drop me an email at

I’d love to hear your feedback.

Elaine O’Curran, Program Manager on the Language Tools Team at Welocalize

New Breakthrough with MT? The Secret is in the Source


Post-Editing of Machine-Translated Content — a Welocalize Lecture at Amazon

By Tanja Schmidt

In our industry, Welocalize is known as an innovator and leading LSP in the field of machine translation. Since 2012, our weMT programs have solved real customer challenges every day, enabling real return on content investments. Our programs extend beyond providing tools for automated translation of words.

Nevertheless, I was surprised — and excited — when a former Welocalize in-house translator contacted her ex-boss (me) to ask whether I would be interested in giving a lecture on machine translation post-editing at her new employer’s European headquarters in Luxembourg. She had left Welocalize about a year ago to pursue a new opportunity as a translator with Amazon, but we never lost contact. As Amazon uses machine translation and post-editing, my former employee remembered the experience the Welocalize in-house teams have with the task of post-editing and thought of organizing this lecture. Amazon is not (yet?) a Welocalize client and, being such a big player, understandably is very strict when it comes to visitors to their offices, which made this even more exciting. The level of trust around this was amazing, and I want to thank Amazon for this great opportunity!

When everything was approved and it was clear that we could really go for it, my former employee and I worked closely together on the exact content of the presentation, because we wanted to make sure it was as relevant as possible to what Amazon is doing and planning to do in the future. We went through about three rounds of tweaking the presentation and, in the end, it was more than worth it. On arriving in Luxembourg, I met some highly interested and motivated people, linguists as well as managers, who were equally keen to learn more about Welocalize’s work in the area and to apply our knowledge to their own setup and plans. They asked a lot of questions, even during the presentation, and I could feel that everyone wanted a chance to pose their questions. We had a very lively and relaxed atmosphere, so thanks for this, Amazonians!

During the presentation, we covered a lot of topics — from the different machine translation engine types and their pros and cons as well as typical errors, to basic rules for post-editing depending on the setup (i.e. light or full PE), productivity, the different mindsets needed for translation vs. post-editing, and the situation at European universities.

What was interesting to see: people starting with machine translation post-editing all come across the same challenges. As their manager, teacher, or perhaps “coach,” you will be confronted with the same concerns and fears — whether at a university, an LSP, or a big player in e-commerce. Ignoring these fears won’t help anyone involved. Instead, be a real “coach,” an advisor — a source of knowledge. Giving your people and their concerns the attention they deserve will help both sides progress in this new and exciting area. With new concepts around neural networks and deep learning, there’s still a lot to come.

Speaking only for myself, I can say that learning more about machine translation and post-editing through trainings and lectures helped me a lot in working my way into post-editing. With lectures at universities and in other settings, I intend to provide the same help for other translators working in this field. In turn, their questions provoke new ideas for my daily work and for trainings to come. This is a mutually rewarding experience; close cooperation across different industry stakeholders and between industry players and universities creates a lot of momentum, so I am happy that Welocalize is very active in this respect and that I am able to participate.


Based in Germany, Tanja Schmidt is German Language & Quality Manager at Welocalize.

Top Localization Conversations in 2015


As 2015 draws to a close, Monique Nguyen reflects on her year of engaging with business leaders on localization hot topics. She identifies three subjects that have dominated conversations with language service buyers at top global organizations.

Many global brands and organizations have made real progress on their overall globalization and localization strategy. There are new techniques and innovations making the translation workflow more efficient and many organizations are realizing the importance of localization as a centralized function. Sometimes, it is not always easy for localization managers to drive decision making through the organization. Localization can be fragmented within an organization and getting full buy-in and budgetary support from all the right management levels can be a challenge.

Working with localization managers and key decision-makers, it is common to find they are driving initiatives to raise the profile of globalization and localization within their organizations. They do this to ensure their internal customers know who they are and what services they provide to the organization. A higher-profile localization function also gains more visibility at the C-suite level, securing buy-in to drive a centralized localization effort and ultimately ensuring a consistent representation of the global brand, content and products across all target markets.

There are three subjects that created a lot of localization and translation buzz in 2015. These are not new topics of interest; they are common in language professional circles and continue to be at the forefront of most discussions.


Innovative companies are using cutting-edge tools to produce more efficient and effective translation life cycles. Partnering with and deploying the right global content platforms adds value directly to the overall enterprise localization program. Smart technology means greater access to languages, markets and added business value. Integration of translation workflows into content publishing is a common theme with large content producers, in particular around the predictable nature of communication and branded localization projects. There is also increased use of smart tools that publish multiple languages simultaneously, drawing from a pool of talent that is integrated into the overall enterprise management system. This is the way forward for enterprise-level translation programs, increasing the efficiency of the process and providing better economics for localization programs.


Conversations on the topic of machine translation (MT) have evolved from IF to WHEN. The MT programs we develop for clients are making a significant impact in managing large content volumes and providing considerable savings. They are actively generating usable content to boost the volume of translated content. Many organizations have been translating and localizing content for many years, and MT is something that has been discussed at length for a while now. It is now a reality for most organizations. Our MT solutions are being deployed as a natural part of the overall localization strategy, in conjunction with other methods. Depending on content type, MT and human translation (whether post-edited MT, straight human translation or transcreation) work together to deliver high-quality content, leveraging the usual TM assets from both disciplines.


Quality is probably the most used word in the globalization and localization industry, and rightly so, because it is the number one priority for most of our clients and global brands. A drop in quality, regardless of language, can negatively impact the brand and brand value. It is no longer just a case of achieving linguistic accuracy; it is about customer experience and being culturally appropriate and “in-context” with the content environment and target audience. Translators and reviewers must have access to the right information to ensure content is translated and represented in a natural way, and sometimes this means deviating from a straight translation.

This year, I have been lucky enough to work with some leading experts in the industry and help shape the localization programs of many established and emerging global enterprises. I look forward to 2016, as I know it is going to be a busy and progressive year for our industry. What stood out for you in 2015? I’d love to learn more.


Based in San Francisco, Monique Nguyen is Welocalize Regional Enterprise Sales Director for West Coast, North America.

Welocalize Spotlighted at AMTA XV 2015 Summit

Alex Yanishevsky and Elaine O’Curran from Welocalize’s Solutions Team recently presented at the 15th Biennial Conference of the Association for Machine Translation in the Americas (AMTA). In this blog, Welocalize Senior Manager and AMTA presenter Alex Yanishevsky provides a summary of the AMTA XV 2015 Summit.

The AMTA XV Summit in Miami consisted of three tracks: Commercial Users, Government Users and MT Researchers. Elaine O’Curran, Welocalize Program Manager, and I delivered presentations in the Commercial Track. Elaine presented “MT Quality Evaluations: From Test Environment to Production.” My presentation, “How Much Cake to Eat: The Case for Targeted MT Engines,” revealed the methodology for training domain-specific engines and provided a case study of the efficacy of such engines.

In the Commercial Track, the AMTA Summit marked a shift from previous summits, where the focus was primarily on evangelizing machine translation (MT), toward a more mature and stable industry stressing repeatability, scalability and measurement of processes. Most presentations focused on the three hottest topics currently being discussed in the MT world:

  • Objectively measuring the quality of MT in production
  • Advanced techniques for engine training
  • The advent of neural networks as the potential future underpinning of MT engines as opposed to the current phrase-based statistical engines

As MT becomes a mainstay in the localization production process, the question of cost and productivity becomes paramount. Many presentations at the AMTA conference focused on tools to evaluate or estimate the quality of MT engine output in order to forecast deadlines and discounts, ascertain where post-editors may be over- or under-editing, and provide linguistic feedback for subsequent engine retraining.
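One common proxy for post-editing effort is post-edit distance: a word-level edit distance between the raw MT output and the final post-edited text. The sketch below is a simplified, generic illustration of that idea (it is not any specific Welocalize or AMTA presenter tool), assuming simple whitespace tokenization:

```python
def edit_distance(a, b):
    """Word-level Levenshtein distance between two token lists."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))  # DP row for the empty prefix of `a`
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n]

def post_edit_ratio(mt_output, post_edited):
    """Fraction of words changed: 0.0 means untouched, higher means heavier editing."""
    mt, pe = mt_output.split(), post_edited.split()
    if not pe:
        return 0.0 if not mt else 1.0
    return edit_distance(mt, pe) / len(pe)

mt = "the engine translated this sentence quickly"
pe = "the engine translated this sentence very quickly"
ratio = post_edit_ratio(mt, pe)  # one inserted word over seven reference words
```

Aggregated over a project, ratios like this can flag segments where post-editors are editing far more (or less) than the engine's estimated quality would predict.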

More mature clients are going beyond the initial foray into MT with generic, one-size-fits-all engines and moving to targeted engines based on product or domain. There were some excellent presentations that focused on the tools and approaches needed to curate content based on dissimilar characteristics and to inject this metadata into translation units to help group TMs for engine training.

Lastly, there were many discussions on neural networks. Neural networks hold great promise for faster and better MT engines in the future, as evidenced by the keynote presentations from Macduff Hughes, Engineering Director of Google Translate, and KyungHyun Cho from New York University (NYU), as well as the inclusion of neural network features in the latest open-source Moses toolkit, version 3.0.

All the Welocalize AMTA XV Summit 2015 presentations are posted below. If you would like to discuss any of these topics in more detail, please contact me directly.


Alex Yanishevsky’s presentation: How Much Cake to Eat: The Case for Targeted MT Engines

Elaine O’Curran’s presentation: MT Quality Evaluations: From Test Environment to Production

Highlights from Tekom Conference 2015

Welocalize recently attended tekom/tcworld 2015 held in Stuttgart, Germany. It represented an opportunity to engage in conversation with attending clients and colleagues, as well as share best practices on the localization of technical communications and documentation. Our attendance at such events is always important for Welocalize, as we benefit by engaging with tekom attendees and industry members to find out what they value in solutions provided by their language services providers.

Here are some of the key highlights from the event:

HIGHLIGHT #1: Automation and Content Management

One of our main findings from the tekom event was how the processes and technologies involved in localization are moving closer together. They are becoming more integrated into one process and smooth workflow. Automating processes increases efficiency. It allows us to reduce administrative tasks and time-to-market, as well as the chances of human error and misunderstandings in file transfer and preparation. The content management systems (CMS) that many technical authors work with must be integrated with the various translation management systems (TMS), terminology and language tools to allow an efficient process and ensure important information is accessible to everyone involved in the translation supply chain.

Welocalize uses its open-source translation management system (TMS), GlobalSight. This platform is available to all clients and localization teams, and allows everyone to engage in an automated translation process.

HIGHLIGHT #2: Machine Translation (MT)

MT is becoming more significant in the language services industry. While human translators still play the most important role, many companies use MT to complement and support human translators and enable higher volumes of content to be translated across various content formats. Although high standards are still required for many technical communications, use of MT and post-edited MT is starting to play a key role. Many tekom attendees were keen to learn more about how Welocalize weMT and language tools can help the overall localization program.

HIGHLIGHT #3: Terminology Management

Good terminology management is crucial in the translation of technical communications. At Welocalize, we make it our duty to provide the best quality translations for our clients, with a high emphasis on consistency of terminology. Attendees were keen to learn more about terminology management solutions and how these solutions could work for them. Furthermore, 75% of our clients agree that inconsistent terminology causes them the most frustration when translating content. More information can be found in the Welocalize blog Terminology Management for Translating Technical Communications.

HIGHLIGHT #4: In-Context Translations and Content Management

Technical content is highly complex and must be localized to high levels of quality and standards. Global organizations demand that translators possess a thorough knowledge of their product and industry, to ensure accuracy and that content is “in-context.” Having access to relevant product and company information as part of the overall translation workflow is key to accurate and relevant translations, and also provides a better working environment for translators and reviewers.

At tekom/tcworld 2015, we were delighted to speak with clients and attendees and gain insight into the value they see in Welocalize localization programs. Attendees provided positive feedback on the fact that Welocalize is very open and transparent in its approach to localization. Deploying innovative tools and technology puts us at the forefront of technical content solutions. Many clients gain great value from the fact that we are willing to work with all tools, including MT and content management systems across a variety of platforms, as we are guided by interoperability. We work with numerous connectors and technologies to ensure our clients have the solution that best fits their unique needs.


Tobias Wiesner, Business Development Director, Germany

Welocalize Discusses Innovation and the Future of Localization at 2015 TAUS Events in North America

Frederick, Maryland – October 1, 2015 – Welocalize, global leader in innovative translation and localization solutions, will lead industry discussions at the TAUS Roundtable taking place in Washington DC, October 6, and the TAUS Annual Conference 2015, in San Jose, October 12-13.

“We’re delighted to welcome senior members of the Welocalize leadership team to the TAUS Roundtable in Washington and the TAUS Annual Conference in Silicon Valley,” said Jaap van der Meer, director and founder of TAUS. “The success of TAUS events is based on insights and input from buyers of language services and expert contributions from key global players in the translation and localization industry, like Welocalize. At this year’s TAUS events in North America, we are looking at how we can harness translation data and use innovative technology to predict future workflows, as well as discussing other key TAUS topics like MT, quality and the latest innovations, including the TAUS Quality Dashboard.”

At the TAUS Roundtable in Washington DC, Welocalize CEO and TAUS Advisory Board Member Smith Yewell will present “How to Predict the Future,” where he will outline new ways of using data and predictive analytics for rethinking how localization programs are implemented, quantified and justified today.

“The future of our industry lies in the ability to align localization programs to measurable business outcomes, which we can achieve by using big data and translation automation technology to predict and quantify results,” said Smith Yewell, Welocalize CEO. “We will be sharing our experience and findings at the upcoming TAUS events to help shape the future of localization.”

Olga Beregovaya, VP of Technology Solutions at Welocalize, will be moderating “Let Google and Microsoft Run with It: The Many Uses of MT,” at the TAUS Annual Conference 2015 in Silicon Valley, October 12-13. Her panel session focuses on how machine translation opens up many new markets and brings content to a wider global audience.

Welocalize’s VP of Software Development, Doug Knoll, will also be contributing to industry discussions at the TAUS Annual Conference as a panelist for “Datafication of Translation.”

Olga Beregovaya will be presenting Welocalize StyleScorer at the TAUS Insider Innovation Contest. StyleScorer, part of the Welocalize weMT suite of language automation tools, is an innovative technology that provides linguistic style analysis to help streamline translation review.

As part of the TAUS Annual Conference, Smith Yewell will be demonstrating his musical talents as a member of the TAUS HAUS Band, performing at the TAUS Rock ‘n Roll Dinner, taking place on Monday, October 12 at 6:30PM at The Continental Bar in San Jose.

For more information about TAUS Roundtable visit:

For more information about the TAUS Annual Conference visit:

About TAUS – TAUS is a resource center for the global language and translation industries. Our mission is to enable better translation through innovation and automation. We envision translation as a standard feature, a utility, similar to the internet, electricity and water. Translation available in all languages to all people in the world will push the evolution of human civilization to a much higher level of understanding, education and discovery. We support all translation operators – translation buyers, language service providers, individual translators and government agencies – with a comprehensive suite of online services, software and knowledge that help them to grow and innovate their business. We extend the reach and growth of the translation industry through sharing translation data and quality evaluation metrics. For more information about TAUS, please visit:

About Welocalize – Welocalize, Inc., founded in 1997, offers innovative translation and localization solutions helping global brands to grow and reach audiences around the world in more than 157 languages. Our solutions include global localization management, translation, supply chain management, people sourcing, language services and automation tools including MT, testing and staffing solutions and enterprise translation management technologies. With more than 600 employees worldwide, Welocalize maintains offices in the United States, United Kingdom, Germany, Ireland, Italy, Japan and China.