Diagnose the key issues in LRM-as-a-judge for MT evaluation and calibrate the models' thinking.
News • Introduction • Quick Start
Configuration • Meta-Evaluation • Acknowledgements • Contact • Citation
- [2025/10/24] The ThinMQM paper is available on arXiv.
- [2025/09/19] ThinMQM has been accepted to NeurIPS 2025!
Evaluating machine translation (MT) quality is a complex task that extends beyond simple string matching. Large Reasoning Models (LRMs) are capable of modeling intricate reasoning processes, yet their role in MT evaluation remains insufficiently understood. In this work, we present a systematic investigation into the use of LRMs as evaluators for MT quality, specifically exploring their ability to replicate the Multidimensional Quality Metrics (MQM) assessment process. Our analysis across various LRMs reveals that evaluation materials must be carefully tailored, as these models tend to overanalyze simple cases and exhibit overestimation biases. To address these challenges, we introduce a simple yet effective method for calibrating LRM reasoning by training them on synthetic, human-like MQM evaluation trajectories. Our experiments show that this approach not only reduces the thinking budget required by LRMs but also enhances evaluation performance across different model scales. These findings underscore the potential of efficiently calibrated LRMs to advance fine-grained, automatic MT evaluation.
```bash
# Clone the repository
git clone https://github.com/NLP2CT/ThinMQM.git
cd ThinMQM
# Install dependencies
pip install -r requirements.txt
# Install mt-metrics-eval evaluation package & Prepare benchmark data
git clone https://github.com/google-research/mt-metrics-eval.git
cd mt-metrics-eval
pip install .
mkdir $HOME/.mt-metrics-eval
cd $HOME/.mt-metrics-eval
wget https://storage.googleapis.com/mt-metrics-eval/mt-metrics-eval-v2.tgz
tar xfz mt-metrics-eval-v2.tgz
```

```bash
# Step 1: Generate responses (using existing scripts)
# For ThinMQM model
bash scripts/run_thinmqm.sh
# For general-purpose LRMs using GEMBA prompt
bash scripts/run_gemba.sh
# Step 2: Extract answers and run meta-evaluation
bash scripts/run_metaeval.sh
```

Please refer to the comments in the scripts to adjust them for your environment. For hyperparameter options, see Configuration.
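Under the hood, meta-evaluation correlates metric scores with WMT human judgments. Below is a minimal, illustrative sketch using mt-metrics-eval's Python API (`EvalSet`, `Scores`), with a toy length-based metric standing in for ThinMQM scores and `scipy` assumed to be available; the repository's `run_metaeval.sh` computes the official WMT meta-evaluation, so treat this only as a sanity check.

```python
# Illustrative sketch only: correlate segment-level metric scores with WMT MQM ratings.
# Assumes mt-metrics-eval and its benchmark data were installed as above; the toy
# length-difference "metric" merely stands in for real ThinMQM scores.
from mt_metrics_eval import data
from scipy.stats import kendalltau

evs = data.EvalSet("wmt22", "en-de")          # WMT22 English-German test set
gold = evs.Scores("seg", "mqm")               # human MQM scores, keyed by system name
ref = evs.all_refs[evs.std_ref]               # standard reference segments

human, metric = [], []
for sys_name, seg_scores in gold.items():
    if sys_name not in evs.sys_outputs:
        continue
    for h, hyp, r in zip(seg_scores, evs.sys_outputs[sys_name], ref):
        if h is None:                         # skip segments without MQM annotations
            continue
        human.append(h)
        metric.append(-abs(len(hyp) - len(r)))  # toy stand-in metric score

tau, _ = kendalltau(human, metric)
print(f"Segment-level Kendall tau vs. MQM: {tau:.3f}")
```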
You can evaluate your own translation data with custom input files:
Example Data:
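The custom evaluator reads plain-text files, one segment per line. A sketch of the layout assumed by the CLI example below (the system file names are placeholders; see `example_custom_evaluation.py` for the authoritative format):

```
cli_example_data/
├── source.txt            # one source segment per line
├── reference.txt         # one reference per line, aligned with source.txt
├── system_outputs/       # one plain-text file per MT system, aligned with source.txt
│   ├── system_A.txt
│   └── system_B.txt
└── results/              # output directory; created by the evaluator
```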
```bash
# Run the example script to see how it works
python example_custom_evaluation.py
```

Example CLI Usage:
```bash
MODEL_NAME_OR_PATH="/path/to/rzzhan/ThinMQM-32B" # Replace with your actual model path
# Set your data paths
SOURCE_FILE="cli_example_data/source.txt"
REFERENCE_FILE="cli_example_data/reference.txt"
SYSTEM_OUTPUTS_DIR="cli_example_data/system_outputs"
OUTPUT_DIR="cli_example_data/results"
SOURCE_LANG="English"
TARGET_LANG="Chinese"
TEMPLATE="thinking" # For ThinMQM: "thinking" (32B) or "thinking_ref" (7/8B)
# Run ThinMQM evaluation
python main.py custom_thinmqm \
--model_name="$MODEL_NAME_OR_PATH" \
--source_file="$SOURCE_FILE" \
--reference_file="$REFERENCE_FILE" \
--system_outputs="$SYSTEM_OUTPUTS_DIR" \
--output_dir="$OUTPUT_DIR" \
--source_lang="$SOURCE_LANG" \
--target_lang="$TARGET_LANG" \
--template="$TEMPLATE" \
--max_new_tokens=4096 \
--temperature=0.6
```

Project structure:

```
├── config/                   # Configuration management
│   └── experiment_config.py
├── evaluators/               # Specific evaluator implementations
│   ├── base_evaluator.py     # Core base classes
│   ├── thinmqm_evaluator.py
│   ├── gemba_evaluator.py
│   └── meta_evaluator.py
├── utils/                    # Utility functions
│   ├── answer_extractor.py
│   ├── template_utils.py
│   ├── mqm_parser.py
│   └── process_results.py
├── scripts/                  # Shell scripts for easy execution
│   ├── run_thinmqm.sh
│   ├── run_gemba.sh
│   └── run_pipeline.sh
├── main.py                   # Main entry point
└── meta_eval_pipeline.md     # Meta-evaluation entry point
```
| Released Model | HF Model | Template | Training Data |
|---|---|---|---|
| rzzhan/ThinMQM-32B | https://huggingface.co/rzzhan/ThinMQM-32B | thinking | https://huggingface.co/datasets/rzzhan/ThinMQM-12k (thinmqm12k_src) |
| rzzhan/ThinMQM-8B | https://huggingface.co/rzzhan/ThinMQM-8B | thinking_ref | https://huggingface.co/datasets/rzzhan/ThinMQM-12k (thinmqm12k_ref) |
| rzzhan/ThinMQM-7B | https://huggingface.co/rzzhan/ThinMQM-7B | thinking_ref | https://huggingface.co/datasets/rzzhan/ThinMQM-12k (thinmqm12k_ref) |
Recommended decoding: `temperature=0.6`, `top_p=0.95` (see the vLLM sketch after the template list below).
- thinking: Source + translation evaluation
- thinking_ref: Source + reference + translation evaluation
- src: Source + translation evaluation
- ref: Reference + translation evaluation
- joint: Source + reference + translation evaluation
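For quick interactive use outside the provided scripts, the sketch below queries ThinMQM-32B through vLLM with the recommended decoding settings. The prompt is a simplified placeholder: the actual MQM prompt and chat template are constructed by the repository's template utilities (e.g. `utils/template_utils.py`), so use `main.py` or the scripts for real evaluations.

```python
# Minimal sketch: decoding ThinMQM-32B with vLLM using the recommended settings.
# The prompt below is a simplified placeholder, not the repository's exact MQM template.
from vllm import LLM, SamplingParams

llm = LLM(model="rzzhan/ThinMQM-32B")                      # requires sufficient GPU memory
params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=4096)

prompt = (
    "Evaluate the following translation with MQM error annotations.\n"
    "English source: The weather is nice today.\n"
    "Chinese translation: 今天天气很好。\n"
)
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)                          # reasoning trace plus MQM judgment
```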
ThinMQM reduces thinking budgets while improving the evaluation performance of LRMs at different model scales.
We thank the open-source community for the excellent tools and libraries that made this work possible, including:
- vLLM for efficient LLM inference
- transformers for model/data loading and hosting
- mt-metrics-eval for the meta-evaluation library and benchmark data
For questions, feedback, or collaboration opportunities, feel free to reach out:
- Runzhe Zhan: nlp2ct.runzhe@gmail.com
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
If you find our model, data, or evaluation code useful, please cite our paper:
```bibtex
@article{zhan2025thinmqm,
  title   = {Are Large Reasoning Models Good Translation Evaluators? Analysis and Performance Boost},
  author  = {Zhan, Runzhe and Huang, Zhihong and Yang, Xinyi and Chao, Lidia S. and Yang, Min and Wong, Derek F.},
  journal = {ArXiv preprint},
  volume  = {2510.20780},
  year    = {2025},
  url     = {https://arxiv.org/abs/2510.20780}
}
```
