
Quality Research in Human Translation and Post-Edited Machine Translation


By: Serena Peruzzi, Welocalize

I am a Dublin City University student currently completing my Master's Degree in Translation Technology. The university, in collaboration with Welocalize, offered me the opportunity to work on a co-sponsored research project. The aim of the research is to find out whether the final quality of human-translated (HT) and post-edited (PE) content differs and, if so, how.

The question of how human-translated and post-edited content differ is highly relevant, as the common perception is that integrating machine translation (MT) into the translation workflow leads to a decrease in quality. Yet we know that some clients find that using machine translation in fact improves quality!

Before starting my Master's, I completed my degree in Translation and Interpreting at the Università di Bologna in Italy. There, I carried out an experimental quality evaluation of MT output (Google Translate) for my Bachelor's dissertation and found it an extremely interesting field of research. I was keen to continue researching MT and was given the great opportunity to do so at Welocalize.

My dissertation internship lasts three months, June through August. My research focuses on one specific account, which uses statistical machine translation by Safaba. A third-party reviewer carries out quality evaluations on both human translation and post-edited machine translation (PEMT).

I am analyzing translations of one content category into four languages: German, Italian, Japanese, and Brazilian Portuguese. Half of the content is translated from scratch by human translators, and the other half is machine translated and then post-edited. The QA scorecards completed by the third-party reviewers serve as the basis for a statistical analysis investigating:

- the number of errors found in HT and PEMT, for each language;
- the most frequent error categories, per workflow and per language;
- the average final score assigned to translations, per workflow and per language;
- the number of FAILs, if any.
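To make the aggregation concrete, here is a minimal sketch of this kind of analysis in Python with pandas. The file name and column names (scorecards.csv, language, workflow, error_category, error_count, final_score, result) are hypothetical placeholders for illustration, not the actual scorecard format used in the project.

```python
import pandas as pd

# Load the reviewers' QA scorecards; one row per reviewed translation.
# Assumed (hypothetical) columns: language, workflow ("HT" or "PEMT"),
# error_category, error_count, final_score, result ("PASS" or "FAIL").
df = pd.read_csv("scorecards.csv")

# Total number of errors per workflow and per language.
errors = df.groupby(["language", "workflow"])["error_count"].sum()

# Most frequent error category per workflow and per language.
top_categories = (
    df.groupby(["language", "workflow", "error_category"])["error_count"]
      .sum()
      .groupby(level=["language", "workflow"])
      .idxmax()
)

# Average final score per workflow and per language.
avg_scores = df.groupby(["language", "workflow"])["final_score"].mean()

# Number of FAILs, if any, per workflow and per language.
fails = (
    df[df["result"] == "FAIL"]
    .groupby(["language", "workflow"])
    .size()
)

print(errors, top_categories, avg_scores, fails, sep="\n\n")
```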

To sum up, at the end of my analysis I will know, for each language, whether post-editing machine-translated output leads to more errors than human translation, and which kinds of errors are more frequent in each workflow, including language and accuracy errors, style errors, and mistranslations.
