Daniel Deutsch

Daniel is a Research Scientist on the Google Translate Research team. His research interests include automatic and human evaluation of text generation.
Authored Publications
    Reliable human evaluation is critical to the development of successful natural language generation models, but achieving it is notoriously difficult. Stability is a crucial requirement when ranking systems by quality: consistent ranking of systems across repeated evaluations is not just desirable, but essential. Without it, there is no reliable foundation for hill-climbing or product launch decisions. In this paper, we use machine translation and its state-of-the-art human evaluation framework, MQM, as a case study to understand how to set up reliable human evaluations that yield stable conclusions. We investigate the optimal configurations for item allocation to raters, number of ratings per item, and score normalization. Our study on two language pairs provides concrete recommendations for designing replicable human evaluation studies. We also collect and release the largest publicly available dataset of multi-segment translations rated by multiple professional translators, consisting of nearly 140,000 segment annotations across two language pairs.
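    The score-normalization step studied in this work can be pictured with a short sketch. The snippet below z-normalizes segment scores within each rater before systems are compared; the tuple layout and the choice of per-rater z-scoring are illustrative assumptions, not necessarily the configuration the paper recommends.

```python
from collections import defaultdict
import statistics

def z_normalize_per_rater(ratings):
    """Z-normalize segment scores within each rater.

    `ratings` is a list of (rater_id, system, segment_id, score) tuples.
    Scores are rescaled so each rater has mean 0 and standard deviation 1,
    removing rater-specific scoring biases before systems are ranked.
    """
    by_rater = defaultdict(list)
    for rater, _, _, score in ratings:
        by_rater[rater].append(score)

    stats = {
        rater: (statistics.mean(scores), statistics.pstdev(scores) or 1.0)
        for rater, scores in by_rater.items()
    }
    return [
        (rater, system, segment, (score - stats[rater][0]) / stats[rater][1])
        for rater, system, segment, score in ratings
    ]
```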
    Collecting high-quality translations is crucial for the development and evaluation of machine translation systems. However, traditional human-only approaches are costly and slow. This study presents a comprehensive investigation of 11 approaches for acquiring translation data, including human-only, machine-only, and hybrid approaches. Our findings demonstrate that human-machine collaboration can match or even exceed the quality of human-only translations, while being more cost-efficient. Error analysis reveals the complementary strengths of human and machine contributions, highlighting the effectiveness of collaborative methods. Cost analysis further demonstrates the economic benefits of human-machine collaboration, with some approaches achieving top-tier quality at around 60% of the cost of traditional methods. We release a publicly available dataset containing nearly 18,000 segments of varying translation quality with corresponding human ratings to facilitate future research.
    Mitigating Metric Bias in Minimum Bayes Risk Decoding
    Proceedings of the Ninth Conference on Machine Translation (2024), pp. 1063-1094
    Minimum Bayes Risk (MBR) decoding has been shown to improve translation quality on both automatic metrics and human evaluations. In this paper we show that MBR decoding tends to yield larger improvements on the utility metric and closely related metrics than on unrelated metrics. To mitigate this metric bias, we explore MBR decoding with ensembles of multiple metrics as the utility function, as well as QE filtering followed by MBR decoding. Human evaluations show that using an ensemble of metrics improves quality over MBR or QE decoding with a single metric.
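    As a rough illustration of the ensemble idea, the sketch below scores each candidate against the other candidates acting as pseudo-references and averages several utility metrics. The utility callables and weights are placeholders, not the metrics used in the paper; in practice they would be neural metrics such as MetricX or a COMET variant.

```python
def mbr_decode(candidates, utilities, weights=None):
    """Minimum Bayes Risk decoding with an ensemble utility.

    `candidates` is a list of candidate translations sampled from the model.
    `utilities` is a list of functions u(hypothesis, pseudo_reference) -> float,
    higher is better. The candidate with the highest expected (weighted
    average) utility against the other candidates is returned.
    """
    weights = weights or [1.0 / len(utilities)] * len(utilities)

    def expected_utility(hyp):
        # Use the other candidates as pseudo-references.
        refs = [c for c in candidates if c is not hyp] or candidates
        per_metric = [
            sum(u(hyp, r) for r in refs) / len(refs) for u in utilities
        ]
        return sum(w * s for w, s in zip(weights, per_metric))

    return max(candidates, key=expected_utility)
```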
    Are LLMs Breaking MT Metrics? Results of the WMT24 Metrics Shared Task
    Nitika Mathur
    Chi-kiu Lo
    Eleftherios Avramidis
    Ricardo Rei
    Brian Thompson
    Frédéric Blain
    Tom Kocmi
    Jiayi Wang
    David Adelani
    Marianna Buchicchio
    Chrysoula Zerva
    Alon Lavie
    2024
    The WMT24 Metrics Shared Task evaluated the performance of automatic metrics for machine translation (MT), with a major focus on LLM-based translations that were generated as part of the WMT24 General MT Translation Task. As LLMs become increasingly popular in MT, it is crucial to determine whether existing evaluation metrics can accurately assess the output of these systems. To provide a robust benchmark for this evaluation, human assessments were collected using Multidimensional Quality Metrics (MQM), continuing the practice from the previous year. Furthermore, building on the success of the previous year, a challenge set subtask was included, requiring participants to design contrastive test suites that specifically target a metric's ability to identify and penalize different types of translation errors. Finally, the meta-evaluation procedure was refined to better reflect real-world usage of MT metrics, focusing on pairwise accuracy at both the system and segment levels. We present an extensive analysis of how well metrics perform on three language pairs: English to Spanish (Latin America), Japanese to Chinese, and English to German. The results strongly confirm last year's findings: neural fine-tuned metrics remain strong even for LLM-based translation systems.
    There's no Data Like Better Data: Using QE Metrics for MT Data Filtering
    Jan-Thorsten Peter
    Mara Finkelstein
    Jurik Juraska
    Proceedings of the Eighth Conference on Machine Translation, Association for Computational Linguistics, Singapore (2023), pp. 561-577
    Quality Estimation (QE), the evaluation of machine translation output without the need for explicit references, has seen substantial improvements in recent years with the use of neural metrics. In this paper we analyze the viability of using QE metrics to filter out low-quality sentence pairs from the training data of neural machine translation (NMT) systems. While most corpus filtering methods focus on detecting noisy examples in collections of text, usually huge amounts of web-crawled data, QE models are trained to discriminate more fine-grained quality differences. We show that by selecting the highest-quality sentence pairs in the training data, we can improve translation quality while reducing the training data size by half. We also provide a detailed analysis of the filtering results, which highlights the differences between the two approaches.
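    The filtering idea can be sketched in a few lines: score every training pair with a reference-free QE metric and keep only the top fraction. The `qe_score` callable below is a placeholder for whatever QE model is available; the 50% default mirrors the halving reported above but is otherwise arbitrary.

```python
def filter_by_qe(parallel_pairs, qe_score, keep_fraction=0.5):
    """Keep only the highest-quality sentence pairs according to a QE metric.

    `parallel_pairs` is a list of (source, target) tuples and `qe_score` is
    any reference-free metric qe_score(source, target) -> float, higher is
    better. Pairs are ranked by QE score and the top fraction is retained.
    """
    scored = sorted(parallel_pairs, key=lambda pair: qe_score(*pair), reverse=True)
    keep_n = max(1, int(len(scored) * keep_fraction))
    return scored[:keep_n]
```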
    Automatic evaluation of machine translation (MT) is a critical tool driving the rapid iterative development of MT systems. While considerable progress has been made on direct estimation of quality scores, the resulting metrics lack the informativeness of more detailed schemes that annotate individual errors, such as Multidimensional Quality Metrics (MQM). In this paper, we fill this gap by proposing AutoMQM, a prompting technique which leverages the reasoning and in-context learning capabilities of large language models (LLMs) and asks them to identify and categorize errors in translations. We start by evaluating recent LLMs, such as PaLM and PaLM-2, through simple score prediction prompting, and we study the impact of labeled data through in-context learning and finetuning. We then evaluate AutoMQM with PaLM-2 models, and we find that it improves performance compared to just prompting for scores (with particularly large gains for larger models) while providing interpretability through error spans that align with human annotations.
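    A minimal sketch of an AutoMQM-style workflow is shown below: prompt an LLM for error spans with categories and severities, then aggregate the errors into a score. The prompt wording, severity weights, and the `call_llm`/`parse_errors` helpers are illustrative assumptions, not the paper's exact setup.

```python
# MQM-style severity weights; these particular values follow common MQM
# practice but are illustrative rather than the paper's exact configuration.
SEVERITY_WEIGHTS = {"major": 5.0, "minor": 1.0}

PROMPT_TEMPLATE = """Identify the errors in the translation below.
For each error give the span, a category (e.g. accuracy/mistranslation,
fluency/grammar), and a severity (major or minor).

Source: {source}
Translation: {translation}
Errors:"""

def automqm_score(source, translation, call_llm, parse_errors):
    """Prompt an LLM for error annotations, then aggregate them into a score.

    `call_llm` and `parse_errors` are placeholders: the first sends the prompt
    to whatever LLM is available, the second turns its free-form answer into a
    list of (span, category, severity) tuples. More/worse errors -> lower score.
    """
    response = call_llm(
        PROMPT_TEMPLATE.format(source=source, translation=translation)
    )
    errors = parse_errors(response)
    return -sum(SEVERITY_WEIGHTS.get(severity, 1.0) for _, _, severity in errors)
```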
    Ties Matter: Meta-Evaluating Modern Metrics with Pairwise Accuracy and Tie Calibration
    George Foster
    Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, Singapore, pp. 12914-12929
    Kendall's tau is frequently used to meta-evaluate how well machine translation (MT) evaluation metrics score individual translations. Its focus on pairwise score comparisons is intuitive but raises the question of how ties should be handled, a gray area that has motivated different variants in the literature. We demonstrate that, in settings like modern MT meta-evaluation, existing variants have weaknesses arising from their handling of ties, and in some situations can even be gamed. We propose instead to meta-evaluate metrics with a version of pairwise accuracy that gives metrics credit for correctly predicting ties, in combination with a tie calibration procedure that automatically introduces ties into metric scores, enabling fair comparison between metrics that do and do not predict ties. We argue and provide experimental evidence that these modifications lead to fairer ranking-based assessments of metric performance.
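    The proposal can be pictured as follows: treat metric scores within some epsilon of each other as tied, count a pair as correct when the metric's (possibly tied) ordering matches the human ordering, and choose the epsilon that maximizes this accuracy. The code below is a simplified illustration of that idea, not the paper's exact formulation.

```python
import itertools

def pairwise_accuracy_with_ties(metric_scores, human_scores, epsilon=0.0):
    """Pairwise accuracy that gives the metric credit for predicting ties.

    Two metric scores within `epsilon` of each other count as a tie. A pair
    is correct when the metric's ranking (including ties) matches the human
    ranking of the same two translations. Assumes at least two items.
    """
    def rank(a, b, eps):
        if abs(a - b) <= eps:
            return 0
        return 1 if a > b else -1

    pairs = list(itertools.combinations(range(len(human_scores)), 2))
    correct = sum(
        rank(metric_scores[i], metric_scores[j], epsilon)
        == rank(human_scores[i], human_scores[j], 0.0)
        for i, j in pairs
    )
    return correct / len(pairs)

def calibrate_ties(metric_scores, human_scores, candidate_epsilons):
    """Tie calibration: pick the epsilon that maximizes pairwise accuracy."""
    return max(
        candidate_epsilons,
        key=lambda eps: pairwise_accuracy_with_ties(metric_scores, human_scores, eps),
    )
```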
    Training and Meta-Evaluating Machine Translation Evaluation Metrics at the Paragraph-Level
    Jurik Juraska
    Mara Finkelstein
    Proceedings of the Eighth Conference on Machine Translation, Association for Computational Linguistics, Singapore (2023), pp. 996-1013
    As research on machine translation moves to translating text beyond the sentence level, it remains unclear how effective automatic evaluation metrics are at scoring longer translations. In this work, we first propose a method for creating paragraph-level data for training and meta-evaluating metrics from existing sentence-level data. Then, we use these new datasets to benchmark existing sentence-level metrics as well as train learned metrics at the paragraph level. Interestingly, our experimental results demonstrate that using sentence-level metrics to score entire paragraphs is as effective as using a metric designed to work at the paragraph level. We speculate this result can be attributed to properties of the task of reference-based evaluation as well as limitations of our datasets with respect to capturing all types of phenomena that occur in paragraph-level translations.
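    One way to picture the data construction is to concatenate consecutive sentence-level examples from the same document and aggregate their scores, as in the hedged sketch below; the paper's actual grouping and score aggregation may differ.

```python
def build_paragraph_examples(sentence_examples, group_size=5):
    """Assemble paragraph-level metric examples from sentence-level ones.

    `sentence_examples` is a list of dicts with 'src', 'hyp', 'ref', and
    'score' keys, assumed to come from the same document in order.
    Consecutive sentences are concatenated and their scores averaged.
    """
    paragraphs = []
    for i in range(0, len(sentence_examples), group_size):
        chunk = sentence_examples[i:i + group_size]
        paragraphs.append({
            "src": " ".join(ex["src"] for ex in chunk),
            "hyp": " ".join(ex["hyp"] for ex in chunk),
            "ref": " ".join(ex["ref"] for ex in chunk),
            "score": sum(ex["score"] for ex in chunk) / len(chunk),
        })
    return paragraphs
```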
    WMT23 Metrics Shared Task Submission: Quality Estimation Using Minimum Bayes Risk
    Subhajit Naskar
    Proceedings of the Eighth Conference on Machine Translation, Association for Computational Linguistics, Singapore (2023), pp. 806-811
    This report describes the Minimum Bayes Risk Quality Estimation (MBR-QE) submission to the Workshop on Machine Translation's 2023 Metrics Shared Task. MBR decoding with neural utility metrics such as BLEURT is known to be very effective at generating high-quality machine translations. We build on the underlying assumption of MBR decoding to develop an MBR-based, reference-free quality estimation metric. Our method uses an evaluator machine translation system and a reference-based utility metric (BLEURT, MetricX) to calculate a quality estimation score for a model. We report results comparing different MBR configurations and utility metrics.
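    The core idea can be sketched briefly: translate the source with an evaluator MT system to obtain pseudo-references, then score the hypothesis against them with a reference-based utility metric. Both callables below are placeholders for whatever evaluator system and metric are used.

```python
def mbr_qe_score(source, hypothesis, pseudo_reference_fn, utility, num_samples=8):
    """Reference-free QE score built on MBR's pseudo-reference idea.

    `pseudo_reference_fn(source, n)` stands in for the evaluator MT system:
    it returns `n` translations of `source`. `utility(hyp, ref)` is a
    reference-based metric such as BLEURT or MetricX. The QE score is the
    hypothesis's average utility against the pseudo-references.
    """
    pseudo_refs = pseudo_reference_fn(source, num_samples)
    return sum(utility(hypothesis, ref) for ref in pseudo_refs) / len(pseudo_refs)
```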
    MetricX-23: The Google Submission to the WMT 2023 Metrics Shared Task
    Jurik Juraska
    Mara Finkelstein
    Mahdi Mirzazadeh
    Conference on Machine Translation (2023)
    This report details the MetricX-23 submission to the Workshop on Machine Translation's 2023 Metrics Shared Task and provides an overview of the experiments that informed which metrics were submitted. Our three submissions, each with a quality estimation (or reference-free) version, are all learned regression-based metrics that vary in the data used for training and the pretrained language model used for initialization. We report results related to understanding (1) which supervised training data to use, (2) the impact of how the training labels are normalized, (3) the amount of synthetic training data to use, (4) how metric performance is related to model size, and (5) the effect of initializing the metrics with different pretrained language models. The training recipes that we found to be most successful are detailed in this report.
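    As one illustration of the label-normalization question studied here, the sketch below linearly rescales raw quality labels into a fixed regression range; the target range and the linear scheme are arbitrary choices for illustration, not the submission's actual recipe.

```python
def normalize_labels(raw_scores, low, high, target_low=0.0, target_high=25.0):
    """Linearly rescale raw quality labels into a fixed regression range.

    `low` and `high` are the bounds of the raw scoring scale; each score is
    mapped into [target_low, target_high] before being used as a regression
    target. The specific target range here is purely illustrative.
    """
    span = (high - low) or 1.0
    return [
        target_low + (score - low) / span * (target_high - target_low)
        for score in raw_scores
    ]
```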