Perform quick human assessment with Adequacy-Fluency
Deeply analyze MT output with MQM error annotation
Run post-editing “lab tests” for Edit Distance
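To illustrate what an Edit Distance "lab test" measures, here is a minimal sketch, not ContentQuo's actual implementation: it assumes a word-level Levenshtein distance between the raw MT output and its post-edited version, normalized by the post-edit length (a TER-style rate). The function names `word_edit_distance` and `edit_rate` are illustrative only.

```python
# Minimal sketch: word-level edit distance between raw MT output and its
# post-edited version, normalized by post-edit length (a TER-style rate).
# Illustrative assumption only -- not ContentQuo's actual computation.

def word_edit_distance(mt_output: str, post_edit: str) -> int:
    """Levenshtein distance counted over whitespace-separated tokens."""
    hyp, ref = mt_output.split(), post_edit.split()
    # prev[j] holds the edit distance between the processed hyp prefix and ref[:j]
    prev = list(range(len(ref) + 1))
    for i, h in enumerate(hyp, start=1):
        curr = [i]
        for j, r in enumerate(ref, start=1):
            cost = 0 if h == r else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution or match
        prev = curr
    return prev[-1]

def edit_rate(mt_output: str, post_edit: str) -> float:
    """Edits per post-edited word; 0.0 means the MT output needed no changes."""
    edits = word_edit_distance(mt_output, post_edit)
    return edits / max(len(post_edit.split()), 1)

if __name__ == "__main__":
    mt = "The cat sit on mat"
    pe = "The cat sits on the mat"
    print(f"{edit_rate(mt, pe):.2f}")  # 0.33: 2 edits over 6 post-edited words
```

A lower rate means post-editors changed less of the machine output, which is one common proxy for MT usefulness in production.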
Most companies implementing Machine Translation run human quality evaluation once they are satisfied with automatic quality metric scores: on shortlisted engines when selecting an engine mix, on newly retrained engines when assessing training results, or on post-edited translations at regular intervals to identify recurring errors that could be fixed through further training. ContentQuo Evaluate MT makes this easy, fast, and efficient.
ContentQuo Evaluate MT is not tied to any specific Language Service Provider: you choose your own suppliers! Many LSPs from the Global Top 100 already use our platform. Here are some of them: