by Miguel Gonzalez, Director of Quality Strategy at Welocalize
It is commonly known that not all content is created equal. Certain types of content, such as technical documentation or support blogs, are not designed to have an immediate, dramatic impact on the end user: they change only occasionally, at a slow pace, and need little or no maintenance. At the other end of the spectrum are content types, like advertising copy or brand image collateral, whose primary objective is to achieve extreme, measurable impact, and to achieve it fast.
These high-visibility content types are subject to such sustained commercial pressure that they are forced to evolve rapidly to stay current and relevant and to preserve their effective SEO status. The challenges posed by increasingly aggressive competition, fast-changing markets and more demanding, better-informed customers make dynamic content management a key aspect of every company's global marketing strategy.
Traditionally, the exercise of analyzing and classifying content in order to decide the best production and quality evaluation methods, as well as the most appropriate talent skillset, has been a one-off. It has also been a primarily manual affair, based on human review of a representative sample of the source content. There have been attempts to semi-automate and streamline parts of this cycle; as a whole, however, it has remained a static, pre-production step.
As a consequence of all of the above, conventional quality evaluation frameworks are becoming obsolete: they were devised with only a limited number of discrete content types in mind and took into consideration only manual, labor-intensive production methods based exclusively on professional human translation. Take, for instance, a relatively recent phenomenon like the dynamic blending of user-generated content (UGC) with branded UI content: where does it fit in the existing quality evaluation (QE) models?
This is why newer QE frameworks have started taking into account a broader set of parameters, such as expected impact, production method (whether the content is 100% human-translated or the post-edited output of a machine-translation engine), distribution channel, projected shelf life and others, all of which can be mixed and weighted in a flexible manner. However, in order to realize their full potential, these new, more sophisticated QE models will need to be automated and seamlessly integrated into the production workflow, along with the prerequisite tools for content-type analysis and categorization and the proper metadata taxonomy.
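To make the idea of mixing and weighting QE parameters concrete, here is a minimal sketch of how such a blended score might be computed. The parameter names, the 0.0–1.0 scales and the weights are purely illustrative assumptions, not part of any standard QE framework:

```python
# Illustrative only: hypothetical QE parameters and weights for blending
# several content attributes into a single priority score (0.0-1.0 scales).
QE_WEIGHTS = {
    "expected_impact": 0.35,
    "shelf_life": 0.15,
    "channel_visibility": 0.30,
    "mt_origin_risk": 0.20,
}

def qe_priority(scores: dict) -> float:
    """Blend per-parameter scores into one weighted priority value."""
    return sum(QE_WEIGHTS[name] * scores.get(name, 0.0) for name in QE_WEIGHTS)

# Example: high-impact, short-lived marketing copy (fabricated values).
marketing_copy = {
    "expected_impact": 0.9,
    "shelf_life": 0.2,
    "channel_visibility": 0.8,
    "mt_origin_risk": 0.4,
}
print(qe_priority(marketing_copy))
```

In a real system the weights themselves would be adjustable per client or per program, which is what makes the model "flexible" rather than a fixed formula.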
The integration of the new QE models with existing TMS and CMS platforms will allow you to:
- Automatically classify source content by domain and content type, based on a detailed pre-analysis of existing linguistic assets and on a pre-set, yet flexible metadata taxonomy which will drive content categorization and tagging.
- Identify and tag brand new and hybrid content types when they emerge.
- Assess how suitable each content type is for MT and the level of post-editing that should be applied to each of them, by assigning a “readability” index and a machine translatability index that will help determine the production and QE methods.
- Assign pre-set quality evaluation profiles to each content type specifying the best-fitting quality metric or suggest a tailored metric with specific parameters. Both pre-set and tailored metrics can be modified, as needed.
- Channel content through content-aware workflows to the appropriate translator and reviewer with the right skill-set, experience, level of domain specialization and other requirements.
- Flag source-text patterns that can have a negative impact on translators' productivity, such as unusual syntactic structures, paraphrases and long or compound sentences, and highlight positive writing patterns. These patterns can then be fed back to content authors for future reference; the avoidance of problematic patterns can be further enforced with the help of content-creation assistance tools.
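The classification-and-routing step described above can be sketched as a simple lookup from content type to a QE profile. The content types, metric names and production methods below are hypothetical examples chosen for illustration; a production system would derive them from the metadata taxonomy and linguistic-asset analysis:

```python
from dataclasses import dataclass

@dataclass
class QEProfile:
    metric: str          # quality metric applied to this content type
    production: str      # production method (human, MT + post-edit, ...)
    reviewer_skill: str  # skillset the workflow should route to

# Hypothetical content-type -> profile mapping; all values are illustrative.
PROFILES = {
    "technical_doc": QEProfile("error-typology review", "MT + light post-edit", "domain reviewer"),
    "marketing": QEProfile("holistic adequacy/fluency", "transcreation (human)", "senior copy editor"),
    "ugc": QEProfile("sampling-based spot check", "raw MT", "none"),
}

def route(content_type: str) -> QEProfile:
    # Unknown or hybrid content types fall back to the most conservative
    # profile until a human tags them and a new profile is defined.
    return PROFILES.get(content_type, PROFILES["marketing"])
```

The point of the sketch is the shape of the data, not the specific values: each content type carries its own metric, production method and talent requirements, and new or hybrid types get a safe default until they are formally categorized.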
Further integration with a data analytics engine would allow quality and production managers to:
- Automatically generate detailed, ad-hoc quality analysis and trend reports, as well as regular performance metrics, with a view to accurately comparing and benchmarking translation teams.
- Rapidly identify language-specific areas for improvement and accurately target action plans and QA automation rules.
- Distribute a continuous stream of quality trend reports to both customers and translation teams.
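The reporting capabilities above boil down to aggregating per-job quality scores over time and across teams. A minimal sketch using only the Python standard library, with fabricated sample data for illustration:

```python
from collections import defaultdict
from statistics import mean

# Toy quality records: (team, week, score). All data is fabricated.
records = [
    ("team_a", 1, 92.0), ("team_a", 2, 94.5), ("team_a", 3, 95.0),
    ("team_b", 1, 88.0), ("team_b", 2, 87.5), ("team_b", 3, 91.0),
]

def weekly_trend(rows):
    """Average score per (team, week): the core of a simple trend report."""
    buckets = defaultdict(list)
    for team, week, score in rows:
        buckets[(team, week)].append(score)
    return {key: mean(vals) for key, vals in sorted(buckets.items())}

def benchmark(rows):
    """Overall average per team, for cross-team comparison."""
    totals = defaultdict(list)
    for team, _, score in rows:
        totals[team].append(score)
    return {team: round(mean(scores), 2) for team, scores in totals.items()}
```

A real analytics engine would add language, content type and error category as dimensions, but the pattern is the same: bucket the raw scores, then summarize each bucket.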
Building a system that automatically categorizes content types and aligns them with the right talent skillsets and QE methods will go a long way toward addressing some of the most pressing challenges of our industry: the efficient processing of vast amounts of diverse content, the continuous gathering and analysis of quality data and performance metrics, and the management of an ever-expanding, diverse supply chain. It will also help your company refine its global content management strategy while improving quality performance and return on content.