The Future of Translation Technology in 2020 and Beyond

Welocalize, October 18, 2019

Insights from the Technology Round Table, European edition

The Translation Technology Round Table, the signature event hosted by the Localization Institute, has always been an excellent opportunity for industry experts to openly discuss the latest developments in the translation technology field and freely debate issues confronting the industry. At Welocalize, we were proud to play an active part in shaping the program of the first-ever European event, held in Heidelberg in September, as members of the Advisory Board, represented by Olga Beregovaya (session on Machine Translation) and Sabina Jasinska (session on Technology Marketing).

The overarching theme of the two days of discussions was the future of language technology. Here are some of the highlights:

Translation Management Systems (TMS)

While new TMS and terminology management solutions are launched every year, we came to the conclusion that in the next 10 years, translation tools will disappear into other technologies, such as Content Management Systems (CMS), as a subfunction of a greater offering. In the meantime, we discussed incremental improvements, such as how context metadata can be added to translatable content automatically, rather than through time-consuming human effort, enabling linguists to verify quality in real time. Currently available translation systems still focus mostly on passing text, not on passing context.
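To illustrate what passing context alongside text might look like, here is a minimal sketch of a translatable segment carrying context metadata that an automated check can act on in real time. All field names and the length-limit check are illustrative assumptions, not the schema of any specific TMS or CMS:

```python
# Sketch: a translatable segment that carries context metadata, so that
# quality can be checked automatically as the translation is entered.
# Field names below are illustrative assumptions, not a real TMS schema.

segment = {
    "id": "checkout.button.submit",   # stable key coming from the CMS
    "source": "Place order",
    "context": {
        "ui_element": "button",       # where the string appears
        "max_length": 20,             # layout constraint for the button
    },
}

def within_length_limit(segment: dict, translation: str) -> bool:
    """Automated real-time check driven by the context metadata."""
    limit = segment["context"].get("max_length")
    return limit is None or len(translation) <= limit

print(within_length_limit(segment, "Bestellung aufgeben"))  # 19 chars -> True
```

With metadata like this attached automatically at export time, a linguist (or a QA tool) can be warned the moment a translation breaks a layout constraint, instead of discovering it in a final review.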

“It was not me, it was the machine”

We all hope that artificial intelligence (AI) will free us from menial tasks and help us focus on value-driven activities. With higher automation, however, always comes higher risk. We concluded that the future of language technologies lies in refining quality checking and tools that catch mistakes, i.e. statistical solutions for error evaluation. Applications that check content quality, such as Acrolinx and Congree, are becoming especially crucial due to the rise of poor-quality source text, often written by non-native speakers or produced by natural language generation tools, which have already begun to appear in many areas of our lives.

In such source text, errors are difficult for a machine to detect, and equally difficult for translators who are not native speakers of the source language. And as the errors multiply, it is risky to assume that the final proofreader or client reviewer will be able to catch them all without the help of technology. As one of the attendees joked, it would be like hoping, on a new construction project, that if the carpenters did a poor job, the painter will fix it!

“Standards are a picture of how things were five years ago”

Would setting new standards for translation technology (like ISO 9001 for processes or ISO 17100 for translation services) help share context between applications? Should we revise the definitions of full, fuzzy, and no matches in translation memory, or the penalties applied when transferring a translation memory (TM) from one system to another? The future, according to the experts, lies in higher connectivity and the uberization of payments (based on need and usage rather than licenses), which in turn requires standard APIs, such as those advocated by TAPICC, a collaborative, community-driven, open-source project. We concluded, however, that while useful, standards are often a picture of how we managed a project five years ago, which quite possibly will not reflect our needs and technological advances in five years' time.
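To make the match-category discussion concrete, here is a minimal sketch of how a TM lookup might classify full, fuzzy, and no matches and apply a penalty to a TM transferred from another system. The similarity measure, thresholds, and penalty value are assumptions chosen for illustration; real tools differ in exactly these details, which is why the definitions were debated:

```python
from difflib import SequenceMatcher

# Sketch of TM match classification. The threshold and transfer
# penalty below are illustrative assumptions, not industry standards.
FUZZY_THRESHOLD = 0.75   # below this, treat the entry as "no match"
TRANSFER_PENALTY = 0.01  # applied when the TM came from another system

def match_score(source: str, tm_source: str, transferred: bool = False) -> float:
    """Similarity score in [0, 1], reduced by a penalty for transferred TMs."""
    score = SequenceMatcher(None, source, tm_source).ratio()
    if transferred:
        score -= TRANSFER_PENALTY
    return max(0.0, score)

def classify(score: float) -> str:
    """Map a similarity score onto the usual TM match categories."""
    if score >= 1.0:
        return "full match"
    if score >= FUZZY_THRESHOLD:
        return "fuzzy match"
    return "no match"

print(classify(match_score("Save your changes", "Save your changes")))  # full match
```

Note how the transfer penalty demotes an identical segment from a migrated TM to a fuzzy match; whether and how such penalties should be standardized was exactly the open question at the round table.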


We really appreciated the wonderful conversations with other attendees, which helped clarify several translation industry concepts our company is working on. We also highly recommend checking out the upcoming Translation Technology Round Tables, to be held in the US and in Europe in 2020!