# EarlyFusionMultimodalLMs

Experimenting with Early Fusion Multimodal LMs

- We started by training a VQ-VAE for image tokenization, with the plan of finetuning early fusion into Llama 3
- Due to compute requirements, we pivoted to stitching the already-trained VQ-VAE from Chameleon onto Llama 3.1 Instruct via finetuning on the liuhaotian/LLaVA-CC3M-Pretrain-595K dataset (see the sketch below)
- Training code is available at net/vllamatrain.py
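
The core stitching step in early fusion is extending the language model's vocabulary with one token per VQ-VAE codebook entry, so image codes and text share a single embedding table and the model trains with plain next-token prediction. Below is a minimal, hypothetical sketch of that step using Hugging Face `transformers`; the model name, codebook size, special tokens, and helper function are illustrative assumptions, not taken from `net/vllamatrain.py`.

```python
# Hypothetical sketch: map VQ-VAE codebook entries into the LM vocabulary so an
# image becomes an ordinary token sequence interleaved with text (early fusion).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

LLM_NAME = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # assumed base model
CODEBOOK_SIZE = 8192                                # assumed VQ-VAE codebook size

tokenizer = AutoTokenizer.from_pretrained(LLM_NAME)
model = AutoModelForCausalLM.from_pretrained(LLM_NAME, torch_dtype=torch.bfloat16)

# One new token per VQ-VAE code, plus begin/end-of-image markers.
image_tokens = [f"<img_{i}>" for i in range(CODEBOOK_SIZE)]
tokenizer.add_tokens(["<boi>", "<eoi>"] + image_tokens)
model.resize_token_embeddings(len(tokenizer))  # new embedding rows start random

def build_sequence(prompt_ids, image_code_ids):
    """Interleave text token ids with VQ-VAE codes mapped into the new vocab range."""
    boi = tokenizer.convert_tokens_to_ids("<boi>")
    eoi = tokenizer.convert_tokens_to_ids("<eoi>")
    base = tokenizer.convert_tokens_to_ids("<img_0>")
    img_ids = [base + int(c) for c in image_code_ids]  # codebook index -> vocab id
    return prompt_ids + [boi] + img_ids + [eoi]
```

With the vocabulary extended this way, finetuning is ordinary causal language modeling over the mixed text/image-token sequence, which is what distinguishes early fusion from cross-attention adapter approaches like LLaVA.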

The training script is complete and executable; the plan is to train on 4xH100. More data (and possibly more compute) will very likely be needed, since in traditional cross-attention-style multimodality (as in LLaVA), the CLIP embeddings already carry some text-aligned information. Do not hesitate to reach out if you would like to contribute.
