Human insights into your MT quality, made efficient.
Gather deep insights into Machine Translation quality with 90% less manual overhead and up to 500% faster turnaround, using any LSP, your own in-house linguists, or both.
Vendor-agnostic technology platform for human evaluation of Machine Translation quality at scale with MQM, DQF, and Adequacy-Fluency
ContentQuo for MT is the only purpose-built, supplier-independent, enterprise-grade technology solution for human Linguistic Quality Evaluation. It supports both the MQM methodology used by Google researchers (also known as Error Annotation) and cheaper, faster methodologies such as Adequacy-Fluency (also known as Rating Scale) in a single environment.
Connect your TMS
Already have your MT engines connected to your TMS? Import any number of pre-translated and/or post-edited files with just 3 clicks.
Mix & match methodologies
Need a quick-and-dirty judgment from your human linguists? Use Adequacy-Fluency or any form of rating scale. Want deep, detailed insights? Use TAUS DQF or any other error typology. Require a hybrid solution combining both approaches? No problem!
No more emails to send
No more manual work! ContentQuo automatically samples your MT output, delivers it to your linguists or Language Service Provider, makes the evaluation easy for them, and returns in-depth reports to you, all in just a few clicks.
Get insights across many MT vendors and models
Once enough evaluation data has accumulated in ContentQuo, view real-time analytical reports on quality scores, error types, and quality trends across every MT vendor and model you use.
Easy to integrate with your tech stack
ContentQuo's own REST API makes it easy to integrate our platform with the rest of your commercial or proprietary technology stack. Build your own connector to ContentQuo: push new evaluation tasks and fetch metrics in just a couple of API calls.
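As a rough illustration of what such a connector might look like, here is a minimal Python sketch that builds a request to create an evaluation task. The base URL, endpoint path, field names, and bearer-token auth scheme are all hypothetical placeholders, not the actual ContentQuo API; consult the real API reference for the correct names.

```python
import json
import urllib.request

# Hypothetical values -- replace with the real API base URL and your token.
API_BASE = "https://api.example.com/v1"
API_TOKEN = "YOUR_API_TOKEN"

def build_task_request(project_id: str, file_url: str, methodology: str):
    """Build (but do not send) a POST request that would create a new
    evaluation task. Field names here are illustrative only."""
    payload = {
        "project": project_id,
        "source_file": file_url,
        "methodology": methodology,  # e.g. "MQM" or "adequacy-fluency"
    }
    return urllib.request.Request(
        url=f"{API_BASE}/evaluation-tasks",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_task_request(
    "proj-42", "https://files.example.com/batch.xliff", "MQM"
)
```

Sending the request (for example with `urllib.request.urlopen(req)`) and a matching GET call to fetch the resulting metrics would complete the round trip described above.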
Human input on MT quality is essential. ContentQuo for MT makes getting it as fast and cheap as (humanly) possible.
Define key risk tolerances
A user manual for your product might tolerate a slightly rougher flow; marketing copy, however, must be immaculate in all cases. Define what really matters.
Set priorities for data gathering
Spread out your quality measurements across vendors and content types. Or focus the efforts on improving just one problematic area of translation!
Assign evaluators automatically
Track the experience, training, and workload of your quality evaluators, and automatically assign the best matches to each task.
Gather metrics on autopilot
Once you've defined the budget, risk tolerances, and priorities, you can put data gathering on autopilot: the program will simply run itself!
Monitor reports and analytics
With dozens of ways to slice and dice your translation quality metrics, you get a complete picture of your situation and stay fully in charge.
Adjust the strategy over time
If things don't turn out quite as you imagined, tweak any aspect of your strategy - or launch a new one in parallel to see how it compares!