```bash
python generative_recommenders/trainers/rqvae_trainer.py config/rqvae/p5_amazon.gin
```
The code is based on RQ-VAE-Recommender. Following the method proposed in *Adapting Large Language Models by Integrating Collaborative Semantics for Recommendation*, we augment the quantize module with a uniform semantic mapping variant, sketched below.
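As a rough illustration of the idea (not the repo's actual API; `uniform_code_assignment` and its parameters are hypothetical), one common way to realize a uniform-usage constraint is to replace plain nearest-neighbor code lookup with a balanced assignment, solved here with a few Sinkhorn iterations in log space:

```python
# Minimal sketch of uniform semantic mapping at one quantization level.
# Assumption: PyTorch-style tensors; names here are illustrative only.
import torch

def uniform_code_assignment(x, codebook, n_iters=20, epsilon=0.05):
    """Assign each row of x to a codebook entry while encouraging every
    code to be used roughly equally often.

    x:        (n_items, dim) residual embeddings at one quantization level
    codebook: (n_codes, dim) code vectors
    Returns a (n_items,) tensor of code indices.
    """
    # Negative scaled squared distances act as assignment logits.
    logits = -torch.cdist(x, codebook) ** 2 / epsilon
    log_p = logits.log_softmax(dim=1)
    # Sinkhorn iterations: alternately normalize over codes and items,
    # pushing code usage toward a uniform distribution.
    for _ in range(n_iters):
        log_p = log_p - log_p.logsumexp(dim=0, keepdim=True)  # balance codes
        log_p = log_p - log_p.logsumexp(dim=1, keepdim=True)  # one code per item
    return log_p.argmax(dim=1)

# Usage: drop-in replacement for an argmin-distance lookup in a quantize step.
codes = uniform_code_assignment(torch.randn(1024, 32), torch.randn(256, 32))
```

Compared to plain nearest-neighbor quantization, this spreads items more evenly across codes, which reduces semantic ID collisions.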
```bash
python generative_recommenders/trainers/tiger_trainer.py config/tiger/p5_amazon.gin
```
The codebase largely follows the original RQ-VAE-Recommender implementation, with some refactoring and upgrades.
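The two commands above form a pipeline: the RQ-VAE stage learns semantic IDs for items, and the TIGER stage trains a generative retriever over those ID sequences. The sketch below shows how a trained RQ-VAE maps item embeddings to semantic ID tuples; the helper name and shapes are illustrative assumptions, not the repo's exact interface:

```python
# Illustrative residual quantization: each level quantizes the residual
# left by the previous level, yielding one ID per level per item.
import torch

@torch.no_grad()
def semantic_ids(item_emb, codebooks):
    """item_emb:  (n_items, dim) item embeddings
    codebooks: list of (n_codes, dim) tensors, one per quantization level
    Returns (n_items, n_levels) semantic ID tuples."""
    residual, ids = item_emb, []
    for cb in codebooks:
        idx = torch.cdist(residual, cb).argmin(dim=1)  # nearest code per item
        ids.append(idx)
        residual = residual - cb[idx]  # pass the residual to the next level
    return torch.stack(ids, dim=1)

codebooks = [torch.randn(256, 32) for _ in range(3)]
print(semantic_ids(torch.randn(5, 32), codebooks))  # (5, 3) ID tuples
```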
Current benchmark results:
| Dataset | Metric | Result |
|---|---|---|
| P5 Amazon-Beauty | Recall@10 | 0.42 |
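For reference, Recall@10 measures the fraction of test cases whose held-out ground-truth item appears among the top 10 recommended candidates. A minimal sketch, assuming one held-out target item per user (illustrative code, not the repo's evaluation harness):

```python
# Illustrative Recall@K: hit if the held-out item is in the top-K candidates.
def recall_at_k(ranked_items, target, k=10):
    """ranked_items: item IDs ordered by score; target: held-out item ID."""
    return 1.0 if target in ranked_items[:k] else 0.0

# Averaged over users (toy data):
users = [([3, 7, 1, 9], 7), ([5, 2, 8, 4], 6)]
print(sum(recall_at_k(r, t) for r, t in users) / len(users))  # 0.5
```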
We provide early implementations of the following large language model recommenders:
- LCRec: Adapting Large Language Models by Integrating Collaborative Semantics for Recommendation
- NoteLLM: A Retrievable Large Language Model for Note Recommendation
- COBRA: Sparse Meets Dense: Unified Generative Recommendations with Cascaded Sparse-Dense Representations
The training scripts for these models are still being prepared, so they are not ready to run yet.
- Add more models: HSTU, OneRec, etc.
- Test on more datasets.
This project builds on RQ-VAE-Recommender by Edoardo Botta.