Machine Translation

A typical way for lay people to assess machine translation quality is to translate from a source language into a target language and back into the source language with the same engine. Though this may intuitively seem like a good method of evaluation, round-trip translation has been shown to be a "poor predictor of quality". The reason is itself fairly intuitive: a round trip tests not one system but two, the engine's language pair translating into the target language and the language pair translating back out of it.
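
The procedure itself is easy to state in code. The sketch below shows a minimal round-trip check in Python; the translate(text, source_lang, target_lang) function is a hypothetical stand-in for whichever engine is being tested, since no particular system or API is named here.

    def round_trip(text, source, target, translate):
        """Translate text into the target language and back with the same engine.

        `translate` is a hypothetical callable (text, source_lang, target_lang) -> str
        standing in for whatever MT system is being tested.
        """
        forward = translate(text, source, target)   # first language pair: source -> target
        back = translate(forward, target, source)   # second language pair: target -> source
        return forward, back

    # Example usage with the English-Italian sentence from Somers (2005):
    #   forward, back = round_trip("Select this link to look at our home page.",
    #                              "en", "it", some_engine)
    # Comparing `back` with the original only tells us about the two language pairs
    # combined, not about the quality of `forward` on its own.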

Consider the following examples of round-trip translation from English into Italian and into Portuguese, taken from Somers (2005):

English to Italian
  Original text:   Select this link to look at our home page.
  Translated:      Selezioni questo collegamento per guardare il nostro Home Page.
  Translated back: Selections this connection in order to watch our Home Page.

English to Portuguese
  Original text:   Tit for tat
  Translated:      Melharuco para o tat
  Translated back: Tit for tat

In the first example, where the text is translated into Italian and then back into English, the back-translated English is significantly garbled, but the Italian itself is a serviceable translation. In the second example, the text translated back into English is perfect, but the Portuguese translation is meaningless.

While round-trip translation may be useful for generating a "surplus of fun," the methodology is deficient for the serious study of machine translation quality.

Human evaluation

This section covers two of the large-scale evaluation studies that have had a significant impact on the field: the 1966 ALPAC study and the ARPA study.

Automatic Language Processing Advisory Committee (ALPAC)

One of the constituent parts of the ALPAC report was a study comparing different levels of human translation with machine translation output, using human subjects as judges. The human judges were specially trained for the purpose. The evaluation study compared an MT system translating from Russian into English with human translators, on two variables.

The variables studied were "intelligibility" and "fidelity". Intelligibility was a measure of how "understandable" the sentence was, and was measured on a scale of 1–9. Fidelity was a measure of how much information the translated sentence retained compared to the original, and was measured on a scale of 0–9. Each point on the scale was associated with a textual description. For example, 3 on the intelligibility scale was described as "Generally unintelligible; it tends to read like nonsense but, with a considerable amount of reflection and study, one can at least hypothesize the idea intended by the sentence".

Intelligibility was measured without reference to the original, while fidelity was measured indirectly. The judge was first shown the translated sentence; after reading it and absorbing its content, the judge was then shown the original sentence and asked to rate it for informativeness. The more informative the original appeared, the more information the translation must have failed to convey, and therefore the lower the quality of the translation.

The study showed that the two variables were highly correlated when the human judgments were averaged per sentence. Variation among raters was small, but the researchers recommended that, at the very least, three or four raters be used. The evaluation methodology separated translations by humans from translations by machines with ease.
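
As a rough illustration of how such judgments can be aggregated, the Python sketch below averages each sentence's intelligibility (1–9) and fidelity (0–9) ratings over three raters, in line with the study's recommendation, and then computes a correlation between the two averaged variables. The individual scores and sentence labels are invented, and Pearson's coefficient is used only as one common correlation measure; the report as described above does not prescribe these details.

    from math import sqrt

    # Hypothetical ratings: for each translated sentence, three raters give an
    # intelligibility score (1-9) and a fidelity score (0-9). The numbers below
    # are invented purely for illustration.
    intelligibility = {"s1": [8, 7, 8], "s2": [3, 4, 3], "s3": [6, 5, 6]}
    fidelity        = {"s1": [8, 8, 7], "s2": [2, 3, 2], "s3": [5, 6, 5]}

    def per_sentence_mean(ratings):
        """Average each sentence's ratings over all raters."""
        return {sent: sum(scores) / len(scores) for sent, scores in ratings.items()}

    def pearson(xs, ys):
        """Pearson correlation coefficient between two equal-length lists of scores."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mx) ** 2 for x in xs))
        sy = sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    mean_int = per_sentence_mean(intelligibility)
    mean_fid = per_sentence_mean(fidelity)
    sentences = sorted(mean_int)
    r = pearson([mean_int[s] for s in sentences], [mean_fid[s] for s in sentences])
    print(f"Correlation between averaged intelligibility and fidelity: {r:.2f}")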

The study concluded that "highly reliable assessments can be made of the quality of human and machine translations".

Advanced Research Projects Agency (ARPA)

As part of the Human Language Technologies Program, the Advanced Research Projects Agency (ARPA) created a methodology for evaluating machine translation systems and continues to perform evaluations based on it. The evaluation programme was instigated in 1991 and continues to this day. Details of the programme can be found in White et al. (1994) and White (1995).

The evaluation programme involved testing several systems based on different theoretical approaches: statistical, rule-based and human-assisted. A number of methods for the evaluation of the output from these systems were tested in 1992, and the most suitable of them were selected for inclusion in the programmes for subsequent years. The methods were: comprehension evaluation, quality panel evaluation, and evaluation based on adequacy and fluency.
