We conduct a preliminary evaluation of ChatGPT/GPT-4 for machine translation. [V1] [arXiv]
This repository presents the main findings and releases the evaluated test sets and translation outputs for replication of the study.
Please kindly cite the papers of the data sources if you use any of them.
- Flores: The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation
- WMT19 Biomedical: Findings of the WMT 2019 Biomedical Translation Shared Task: Evaluation for Medline Abstracts and Biomedical Terminologies
- WMT20 Robustness: Findings of the WMT 2020 Shared Task on Machine Translation Robustness
We ask ChatGPT itself for advice on prompts that can trigger its translation ability:
Summarized prompts:
- Tp1:
Translate these sentences from [SRC] to [TGT]:
- Tp2:
Answer with no quotes. What do these sentences mean in [TGT]?
- Tp3:
Please provide the [TGT] translation for these sentences:
✅ Tp3 performs the best and is used in the following experiments.
Table 1: Comparison of different prompts for ChatGPT to perform Chinese-to-English (Zh⇒En) translation.
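As a concrete illustration, the three templates above can be filled programmatically before querying the model. The helper below is a minimal sketch; the function name and template dictionary are our own, not part of the released code:

```python
# Illustrative sketch of filling the Tp1-Tp3 prompt templates from the study.
# The helper name and structure are our own, not from the released code.
TEMPLATES = {
    "Tp1": "Translate these sentences from {src} to {tgt}:",
    "Tp2": "Answer with no quotes. What do these sentences mean in {tgt}?",
    "Tp3": "Please provide the {tgt} translation for these sentences:",
}

def build_prompt(template_id: str, sentences: list[str], src: str, tgt: str) -> str:
    """Return the instruction line followed by the source sentences, one per line."""
    instruction = TEMPLATES[template_id].format(src=src, tgt=tgt)
    return "\n".join([instruction, *sentences])
```

For example, `build_prompt("Tp3", sentences, "Chinese", "English")` produces the best-performing Tp3 query for Zh⇒En.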
We evaluate the translations between four languages, namely, German, English, Romanian and Chinese, considering both the resource and language family effects.
- ChatGPT performs competitively with commercial translation products (e.g., Google Translate) on high-resource European languages but lags behind significantly on low-resource languages.
- The gap between ChatGPT and the commercial systems widens further on distant language pairs than on close ones.
We evaluate the translation robustness of ChatGPT on biomedical abstracts, Reddit comments, and crowdsourced speech.
- ChatGPT does not perform as well as the commercial systems on biomedical abstracts or Reddit comments but exhibits good results on spoken language.
For distant languages, we explore an interesting strategy named Pivot Prompting, which asks ChatGPT to translate the source sentence into a high-resource pivot language first and then into the target language. Accordingly, we adjust the Tp3 prompt as below:
- Tp3-pivot:
Please provide the [PIV] translation first and then the [TGT] translation for these sentences one by one:
Table 4: Performance of ChatGPT with pivot prompting. New results are obtained from the ChatGPT version updated on 2023.01.31. LR: length ratio.
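Pivot prompting only changes the instruction line, but parsing then has to pull the target-language lines out of the interleaved pivot/target output. A minimal sketch, assuming the model answers with alternating pivot and target lines; the helper names are ours, not from the released code:

```python
# Illustrative sketch of pivot prompting (Tp3-pivot) and output parsing.
# Assumes the model replies with alternating pivot/target lines per sentence.
PIVOT_TEMPLATE = (
    "Please provide the {piv} translation first and then the {tgt} "
    "translation for these sentences one by one:"
)

def build_pivot_prompt(sentences: list[str], piv: str, tgt: str) -> str:
    """Instruction line followed by the source sentences, one per line."""
    return "\n".join([PIVOT_TEMPLATE.format(piv=piv, tgt=tgt), *sentences])

def extract_targets(response: str) -> list[str]:
    """Keep every second non-empty line: pivot lines come first, targets second."""
    lines = [ln for ln in response.splitlines() if ln.strip()]
    return lines[1::2]
```

In practice the model's formatting may drift (numbering, blank lines, labels), so the parsing step usually needs light post-processing on top of this sketch.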
We update the translation performance of GPT-4, which exhibits huge improvements over ChatGPT. Refer to [ParroT] for the COMET metric results.
We analyze the translation outputs with compare-mt at both the word level and the sentence level.
- ChatGPT performs the worst on low-frequency words, an issue that is largely fixed by GPT-4.
- ChatGPT performs the worst on short sentences, which we attribute to the observation that ChatGPT tends to translate well-known terminology into full names rather than the abbreviations used in the references.
Table 6-7: Automatic analysis: (a) F-measure of target word prediction w.r.t. frequency. (b) BLEU score w.r.t. length bucket of target sentences.
We ask three annotators to identify the errors in the translation outputs, including under-translation, over-translation, and mis-translation. Based on the translation errors, the annotators rank the translation outputs of Google, ChatGPT and GPT-4 accordingly, with 1 as the best system and 3 as the worst.
- ChatGPT makes more over-translation errors and mis-translation errors than Google Translate, tending to generate hallucinations.
- GPT-4 makes the fewest errors and is ranked 1st, even though its BLEU score is lower than that of Google Translate.
Table 8-9: Human analysis: (a) Number of translation errors identified by the annotators. (b) Human rankings of the translation outputs.
A few translation outputs:
- ChatGPT hallucinates at the first few tokens and also mis-translates "过量降水".
- Both ChatGPT and GPT-4 translate "广泛耐药结核病" into the full name while the reference and Google Translate do not.
- GPT-4 can translate the terminology "美国公共广播公司" into the abbreviation as the reference.
- GPT-4 translates the terminology "狼孩" more properly based on the context while Google Translate and ChatGPT cannot.
We acknowledge that this report is far from complete; several aspects could make it more reliable in the future:
- Coverage of Test Data: Currently, we randomly select 50 samples from each test set for evaluation due to the response delay of ChatGPT. While some projects on GitHub try to automate the access process, they are vulnerable to browser refreshes or network issues. The official API from OpenAI, once available, may be a better choice.
- Reproducibility Issue: By querying ChatGPT multiple times, we find that the results for the same query may vary across trials, which introduces randomness into the evaluation. For more reliable results, it is best to repeat the translation multiple times for each test set and report the averaged result.
- Translation Abilities: We only focus on multilingual translation and translation robustness in this report. However, there are some other translation abilities that can be further evaluated, e.g., constrained machine translation and document-level machine translation.
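The repeated-trial evaluation suggested under the reproducibility point can be sketched generically: run the non-deterministic system several times on the same test set and report the mean (and spread) of the metric. The helper below is hypothetical; `translate` and `score` stand in for the actual system call and metric (e.g., BLEU):

```python
# Hypothetical sketch of averaging a metric over repeated trials to smooth
# out the run-to-run randomness of ChatGPT responses.
from statistics import mean, stdev

def repeated_score(translate, score, sources, references, trials=3):
    """Translate the test set `trials` times and aggregate the metric.

    translate: callable mapping source sentences -> hypothesis sentences
    score: callable mapping (hypotheses, references) -> a float metric
    Returns (mean score, standard deviation across trials).
    """
    scores = [score(translate(sources), references) for _ in range(trials)]
    spread = stdev(scores) if trials > 1 else 0.0
    return mean(scores), spread
```

With a real system, each call to `translate` would issue fresh queries, so the returned spread quantifies how much the evaluation result varies across trials.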
- Slator report: Tencent Pits ChatGPT Translation Quality Against DeepL and Google Translate
- Twitter discussions: AK, Aran Komatsuzaki, Haruhiko Okumura, Daun
Please kindly cite our report if you find it helpful:
@inproceedings{jiao2023ischatgpt,
  title = {Is ChatGPT A Good Translator? A Preliminary Study},
  author = {Wenxiang Jiao and Wenxuan Wang and Jen-tse Huang and Xing Wang and Shuming Shi and Zhaopeng Tu},
  booktitle = {ArXiv},
  year = {2023}
}