# VQA-HUD

This is the repository for the AAAI 2025 paper: Mind the Uncertainty in Human Disagreement: Evaluating Discrepancies Between Model Predictions and Human Responses in VQA


## 🛠 Installation & Usage

1. Clone the repository:

   ```bash
   git clone https://github.com/mainlp/vqa-hud.git
   cd vqa-hud
   ```

2. Prepare the dataset and base models: download the VQA 2.0 dataset, follow the setup instructions for LXMERT and BEIT3, and fine-tune the provided checkpoints.

## TODO

- [☑️] script for HUD scores
- [☑️] script for evaluation

3. Compute the HUD scores and set splits:

   ```bash
   python HUD_score.py
   python split_hud.py --ascending
   ```
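For intuition only (the actual definition is implemented in `HUD_score.py`), a human-disagreement score for a question can be sketched as the normalized entropy of its annotator answer distribution; the function below is an illustrative stand-in, not the repository's metric:

```python
import math
from collections import Counter

def disagreement_score(answers):
    """Normalized Shannon entropy of the human answer distribution.

    0.0 = full agreement, 1.0 = maximal disagreement.
    Illustrative sketch only; not necessarily the HUD definition
    computed by HUD_score.py.
    """
    counts = Counter(answers)
    n = len(answers)
    if len(counts) <= 1:
        return 0.0  # everyone gave the same answer
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    return entropy / math.log(len(counts))  # normalize to [0, 1]

# VQA 2.0 provides 10 human answers per question.
print(disagreement_score(["cat"] * 10))               # full agreement -> 0.0
print(disagreement_score(["cat"] * 5 + ["dog"] * 5))  # 50/50 split -> 1.0
```

Sorting questions by such a score in ascending order is what `split_hud.py --ascending` suggests: splits ranging from low to high human disagreement.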

All evaluation functions are in `evaluation.py`; you can use them to implement customized data evaluations.
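As an example of the kind of metric a customized evaluation might add (the function name and signature here are hypothetical, not part of `evaluation.py`'s API), the standard VQA soft accuracy compares a model answer against the ten human answers:

```python
def vqa_soft_accuracy(model_answer, human_answers):
    """Standard VQA soft accuracy: an answer counts as fully correct
    if at least 3 of the 10 annotators gave it, partially otherwise.
    Illustrative helper; not taken from the repository's evaluation.py.
    """
    matches = sum(a == model_answer for a in human_answers)
    return min(matches / 3.0, 1.0)

humans = ["cat"] * 6 + ["dog"] * 3 + ["bird"]
print(vqa_soft_accuracy("cat", humans))   # 1.0  (6 matches)
print(vqa_soft_accuracy("dog", humans))   # 1.0  (3 matches)
print(vqa_soft_accuracy("bird", humans))  # 0.33 (1 match)
```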

## 📄 Citation

```bibtex
@article{Lan_Frassinelli_Plank_2025,
  title   = {Mind the Uncertainty in Human Disagreement: Evaluating Discrepancies Between Model Predictions and Human Responses in VQA},
  author  = {Lan, Jian and Frassinelli, Diego and Plank, Barbara},
  journal = {Proceedings of the AAAI Conference on Artificial Intelligence},
  volume  = {39},
  number  = {4},
  pages   = {4446-4454},
  year    = {2025},
  month   = {Apr.},
  url     = {https://ojs.aaai.org/index.php/AAAI/article/view/32468},
  doi     = {10.1609/aaai.v39i4.32468}
}
```
