
Beyond Disruption: A Recap from Localization World Bangkok

By Olga Beregovaya, Welocalize VP of Language Tools

Returning from Localization World Bangkok, where I presented “Beyond Disruption,” I have taken time to reflect on some of my conversations with attendees, both fellow language service providers and buyers.

Takeaway #1: Everyone is looking for ways to redirect their localization and translation investment to where it is needed most. Some are most interested in full human translation services with transcreation for marketing and brand content. They want strict adherence to the “voice” of the content.

Outside of your traditional “high pitch” branded content, which is usually passed through the transcreation channel and requires very special and rather rare local creative talent, many buyers now view some of their user-generated content (UGC) as part of their brand and marketing assets. Until recently, most viewed UGC as a “lower-tier, less professional workforce requirement.”

The terms “voice”, “tone” and “message” now appear in the context of user reviews of a company’s offering and end-user forum content. As I noted in my presentation at TAUS in October 2013, I am convinced that the time has come for us to talk about new marketing approaches and to pay very close attention to recruitment, training and process automation for this specific content type. Yes, it may not be suitable for machine translation (MT); however, the brand terminology and style still need to be consistent!

You can get the full presentation by clicking here.
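To make that consistency point concrete, here is a minimal sketch of a brand-terminology check for translated UGC. Everything in it (the glossary entries, the sample segments, the check_terms helper) is a hypothetical illustration, not any client’s actual terminology or tooling.

```python
# Hypothetical sketch: flag segments where a glossary term appears in the
# source but its approved target rendering is missing from the translation.
GLOSSARY = {"SuperWidget": "SuperWidget", "dashboard": "panel de control"}

def check_terms(source: str, target: str, glossary: dict[str, str]) -> list[str]:
    """Return the glossary terms whose approved translation is missing."""
    issues = []
    for src_term, tgt_term in glossary.items():
        if src_term.lower() in source.lower() and tgt_term.lower() not in target.lower():
            issues.append(src_term)
    return issues

print(check_terms(
    "Love the new dashboard on my SuperWidget!",
    "¡Me encanta el nuevo tablero de mi SuperWidget!",
    GLOSSARY,
))  # -> ['dashboard']: the approved rendering "panel de control" was not used
```

Even where raw or lightly edited MT is acceptable for UGC, an inexpensive check along these lines keeps brand terminology consistent across the stream.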

Takeaway #2: There is a strong need to find ways to optimize the costs and the “return” on other content types, whether by using MT, by changing the talent model, or both.

Current global asset management, translation and publishing practices continue to follow the protocol established at the dawn of the localization industry. With the traditional set of expectations for CAT (computer-aided translation) tools comes a strong reliance on fuzzy matching, on terminology managed and controlled via glossaries, and on strictly enforced style guides, not to mention the continuation of the traditional TEP (translation, editing, proofreading) model. Even when MT is deployed on a program, in the majority of cases the same quality metrics are applied and the same quality scorecards are used.
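For readers outside the CAT world, here is a minimal sketch of what that fuzzy matching amounts to: new source segments are scored against translation memory (TM) entries, and close-enough matches are offered to the translator. The TM contents and the generic similarity scoring below are illustrative assumptions, not any particular tool’s behavior.

```python
# Illustrative sketch of TM fuzzy matching using a generic string-similarity
# ratio; commercial CAT tools use their own, more elaborate scoring.
from difflib import SequenceMatcher

translation_memory = {
    "Click Save to apply your changes.": "Haga clic en Guardar para aplicar los cambios.",
    "Click Cancel to discard your changes.": "Haga clic en Cancelar para descartar los cambios.",
}

def best_fuzzy_match(segment: str, tm: dict[str, str]) -> tuple[str, str, float]:
    """Return the closest TM source, its stored translation, and a 0-100 score."""
    scored = [
        (src, tgt, 100 * SequenceMatcher(None, segment.lower(), src.lower()).ratio())
        for src, tgt in tm.items()
    ]
    return max(scored, key=lambda item: item[2])

src, tgt, score = best_fuzzy_match("Click Save to apply the changes.", translation_memory)
print(f"{score:.0f}% match: {src!r} -> {tgt!r}")
```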

Conversations at the conference confirmed the idea that “one-size-fits-all” is no longer relevant. The money really needs to be spent where it is required, so the buyer can re-purpose the funds for other, more quality-sensitive content types.

Willem Stoeller spoke in his presentation about varied quality levels mapped to the TAUS Dynamic Quality Framework (DQF) platform and mentioned a real-life experiment with quality levels performed by EMC, in which five different quality levels were introduced, each defined by the purpose of the content. This resonated very well with the similar strategy we have adopted at Welocalize, which I covered in my presentation. The fact remains: there is never degradation in quality. The quality is always good; it is simply right for the purpose.
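As a purely hypothetical illustration of such tiering, the sketch below keys five quality levels to content purpose. The level names, content types and workflows are my assumptions for the example, not EMC’s or Welocalize’s actual definitions.

```python
# Hypothetical "fit for purpose" tiers: the workflow and review effort scale
# with how quality-sensitive the content type is.
QUALITY_LEVELS = {
    "marketing": {"level": 1, "workflow": "transcreation", "review": "in-country review"},
    "legal": {"level": 2, "workflow": "human TEP", "review": "full LQA"},
    "documentation": {"level": 3, "workflow": "MT + full post-editing", "review": "sampled LQA"},
    "knowledge base": {"level": 4, "workflow": "MT + light post-editing", "review": "spot checks"},
    "user-generated": {"level": 5, "workflow": "raw MT + terminology pass", "review": "none"},
}

def quality_profile(content_type: str) -> dict:
    """Look up the tier for a content type, defaulting to full human TEP."""
    return QUALITY_LEVELS.get(content_type, QUALITY_LEVELS["legal"])

print(quality_profile("knowledge base"))
```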

So, how do we implement the concept of return on content (ROC) in the real world, in a partnership between a Language Services Provider (LSP) acting as a “trusted advisor” and a client? After my presentation, I spoke to a few attendees who said that while they agreed with our approach, they were still doubtful as to whether they would be able to sell the idea to their in-country colleagues. My recommendation has always been: just let us produce samples for you and highlight the particularities of each of the quality levels! The proof is in the quality sample.

Words and presentations don’t always mean much when making your case; evidence makes the difference. When your counterpart sees an online help chapter written in perfect Spanish, even though the style guide was not enforced, they will understand the value. Top that off with the realization that the price tag for translation has dropped significantly, and I guarantee they will find your proposition rather compelling.

Takeaway #3: Buyers are making decisions based on scalable needs, defined by the value of the content. They want the best return with the most positive impact on the business. Buyers want a partner that will work as a trusted advisor; they want help, and they want innovative solutions.

An LSP is in the unique position of working with many companies at the same time. Client A may be searching for how to get the best value out of their content, while Client B may have already deployed a similar process in partnership with the LSP. Or the clients may have similar expectations and requirements around a specific content type. An LSP can work in collaboration to find the optimal, innovative process that works best for each client. For example, when considering MT: if we have already deployed a lower-cost, MT-driven light post-editing solution, all our clients are able to utilize this expertise and knowledge by adapting it to the new set of requirements specific to their content. It saves time and money and dramatically improves speed to market.

Takeaway #4: Another thing that seemed to resonate well with the audience is that source content analysis and classification are the first step towards content-specific workflow design.

Why did I bring this up in my presentation in the first place? It is a discovery I made when we started working on several MT post-editing projects where the main content type is user-generated content. The post-editor works by parsing the rather messy source first, reordering it, and then rearranging the target accordingly, changing parts of speech, declensions and word endings as needed. Before embarking on these UGC projects, we performed extensive source content analysis. The findings dictated the MT engine customization methodology, all the necessary pre- and post-processing steps, and the post-editing task force selection and training requirements. The source content analysis helped us save time and the unnecessary effort of over-editing.

It all starts at the source. Whether to route the content to a transcreation workflow, pass it on to an MT engine for full post-editing, opt for a lighter flavor of post-editing, or even direct the LQA (linguistic quality assurance) effort towards terminology, knowing in advance that the terminology is going to be consistent: all these findings can and should be obtained from the source, right at the beginning.
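Here is a minimal sketch of what such source-driven routing could look like, assuming a handful of hypothetical profiling signals (brand_voice, terminology_density, noisiness); a real source profiler would produce far richer features, and the thresholds below are invented for the example.

```python
# Hypothetical routing rule: simple source-analysis signals decide which
# workflow a segment (or a whole content batch) enters.
from dataclasses import dataclass

@dataclass
class SourceProfile:
    brand_voice: bool           # creative, brand-sensitive copy
    terminology_density: float  # share of tokens that are managed terms, 0..1
    noisiness: float            # UGC-style irregularity, 0..1

def route(profile: SourceProfile) -> str:
    if profile.brand_voice:
        return "transcreation"
    if profile.noisiness > 0.6:
        return "MT + light post-editing"
    if profile.terminology_density > 0.3:
        return "MT + full post-editing + terminology-focused LQA"
    return "MT + full post-editing"

print(route(SourceProfile(brand_voice=False, terminology_density=0.4, noisiness=0.2)))
```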

Luckily, there are a lot of tools at our disposal to help. Our collaboration with CNGL (Centre for Next Generation Localization) gives us access to an industry partnership project focusing on source content profiling. In addition, Welocalize has several proprietary NLP (natural language processing) tools which allow us to benchmark new source content against sentences that have historically translated well and those that did not do so well in translation, let alone MT.
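As a rough illustration of that benchmarking idea (the proprietary tools themselves are not public), the sketch below scores a new segment’s affinity to a set of sentences that translated well versus a set that did not, using character n-gram overlap as a stand-in for real profiling features; the reference sentences are made up.

```python
# Illustrative benchmark: higher affinity to the "bad" set suggests the
# segment may need heavier pre-processing or a different workflow.
def char_ngrams(text: str, n: int = 3) -> set[str]:
    text = text.lower()
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def affinity(segment: str, reference: list[str]) -> float:
    """Mean Jaccard overlap between the segment and a reference set."""
    seg = char_ngrams(segment)
    scores = []
    for ref in reference:
        r = char_ngrams(ref)
        scores.append(len(seg & r) / len(seg | r) if seg | r else 0.0)
    return sum(scores) / len(scores)

good = ["Click Save to apply your changes.", "Select a language from the list."]
bad = ["omg this thing dont even WORK lol!!!", "y u no save my settings???"]

segment = "Choose a language from the drop-down list."
print("good affinity:", round(affinity(segment, good), 3))
print("bad affinity:", round(affinity(segment, bad), 3))
```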

Localization World Bangkok provided a stunning backdrop to some insightful and engaging conversations with colleagues, clients and friends. I can affirm that we are moving beyond disruption: we are now ready to deploy. The right resources, knowledge and expertise can take you a long way in advancing your decisions and the impact you make on your business.

I look forward to our work ahead and helping you achieve maximum return on content!

Olga
