
ApexGen-X/MergeVQ


Siyuan Li1,3*, Luyuan Zhang2*, Zedong Wang4, Juanxi Tian3, Cheng Tan1,3, Zicheng Liu1,3, Chang Yu3, Qingsong Xie5†, Haoqian Wang2, Zhen Lei6,7,8†

1 Zhejiang University   2 Tsinghua University   3 Westlake University   4 HKUST   5 OPPO AI Center   6 CAIR, HKISI-CAS   7 MAIS CASIA   8 University of Chinese Academy of Sciences

* Equal contributions; † Corresponding authors.

Figure: Overview of the MergeVQ framework.

Masked Image Modeling (MIM) with Vector Quantization (VQ) has achieved great success in both self-supervised pre-training and image generation. However, most existing methods struggle with the trade-off in a shared latent space between generation quality and representation learning and efficiency. To push the limits of this paradigm, we propose MergeVQ, which incorporates token merging techniques into VQ-based autoregressive generative models to bridge the gap between visual generation and representation learning in a unified architecture. During pre-training, MergeVQ decouples top-k semantics from the latent space with a token merge module after the self-attention blocks in the encoder for subsequent Look-up Free Quantization (LFQ) and global alignment, and recovers their fine-grained details through cross-attention in the decoder for reconstruction. For second-stage generation, we introduce MergeAR, which performs KV cache compression for efficient raster-order prediction. Experiments on ImageNet verify that MergeVQ, as an AR generative model, achieves competitive performance on both representation learning and image generation tasks while maintaining favorable token efficiency and inference speed.
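To make the token-merging idea concrete, here is a minimal, illustrative sketch (not the released MergeVQ module): it keeps the k highest-scoring tokens, assigns every source token to its most similar kept token, and averages each group, loosely in the spirit of ToMe. The feature-norm scoring, the merge_tokens name, and the shapes are assumptions for illustration only.

# Illustrative sketch of token merging (NOT the released MergeVQ module).
import torch
import torch.nn.functional as F

def merge_tokens(x: torch.Tensor, k: int):
    """x: (B, L, C) encoder tokens -> (B, k, C) merged tokens and a (B, L) source-to-group map."""
    B, L, C = x.shape
    # Keep the k highest-scoring tokens; feature norm stands in for a learned importance score.
    keep_idx = x.norm(dim=-1).topk(k, dim=1).indices                      # (B, k)
    kept = torch.gather(x, 1, keep_idx.unsqueeze(-1).expand(-1, -1, C))   # (B, k, C)
    # Assign every token to its most similar kept token by cosine similarity.
    sim = F.normalize(x, dim=-1) @ F.normalize(kept, dim=-1).transpose(1, 2)  # (B, L, k)
    assign = sim.argmax(dim=-1)                                           # (B, L)
    # Average the members of each group to obtain the compressed token sequence.
    merged = torch.zeros_like(kept)
    counts = torch.zeros(B, k, 1, device=x.device)
    merged.scatter_add_(1, assign.unsqueeze(-1).expand(-1, -1, C), x)
    counts.scatter_add_(1, assign.unsqueeze(-1), torch.ones(B, L, 1, device=x.device))
    return merged / counts.clamp(min=1), assign

tokens = torch.randn(2, 256, 64)                  # e.g., a 16x16 grid of patch features
merged, source_map = merge_tokens(tokens, k=144)  # 256 -> 144 tokens, as in the (G+R) setting
print(merged.shape, source_map.shape)             # torch.Size([2, 144, 64]) torch.Size([2, 256])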

🤗 HuggingFace Daily Papers Top-1: https://huggingface.co/papers/2504.00999

Catalog

We plan to release the implementations of MergeVQ over the next few months (before CVPR 2025 takes place). Please watch this repository for the latest release, and feel free to open issues for discussion! Currently, we have released the basic implementations of the MergeVQ tokenizers.

📖 Implementations

🛠️ Installation

GPU

  • Environments: We have tested Python 3.10.0 + torch 2.1.0 + CUDA 12.1 and Python 3.8.8 + torch 1.13.0 + CUDA 11.8; other versions may also work.
  • Dependencies: pip install -r requirements.txt. Here is an example of installing torch 2.4.0 + CUDA 12.4 from scratch:
conda create -n mergevq python=3.10.0
conda activate mergevq
pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu124
pip install -r requirements.txt
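
After installing, you can quickly verify the build (plain PyTorch, independent of this repository); on a machine with a visible GPU the last value should be True:

# Quick environment check: torch version, CUDA toolkit version, and GPU visibility.
import torch
print(torch.__version__, torch.version.cuda, torch.cuda.is_available())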

NPU

  • Environment: Python 3.9.16 and CANN 8.0.T13
  • Main Dependencies: torch==2.1.0+cpu, torch-npu==2.1.0.post3-20240523, and Lightning
  • Other Dependencies: see requirements.txt

Datasets Preparation

We use the ILSVRC2012 ImageNet dataset, with the training and validation sets placed under the dataset root, which can be downloaded and untarred into the following structure:

.cache/imagenet
├── train/
│   ├── n01440764
│   │   ├── n01440764_10026.JPEG
│   │   ├── n01440764_10027.JPEG
│   │   └── ...
│   ├── n01443537
│   └── ...
└── val/
    ├── n01440764
    ├── n01443537
    └── ...

When training or evaluation starts, the following meta files will be generated under .cache/imagenet/train and .cache/imagenet/val: filelist.txt, imagenet_idx_to_synset.yaml, synset_human.txt, and validation_synset.txt. If you want to use a custom dataset or ImageNet at another path, please specify cachedir for taming.data.imagenet.ImageNetTrain in the training config file.
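
As a quick sanity check of the layout above (standard library only; adjust the root path if you use a custom cachedir), each split should contain 1,000 synset folders:

# Sanity-check the ImageNet folder layout described above.
from pathlib import Path

root = Path(".cache/imagenet")  # change this if your dataset lives elsewhere
for split in ("train", "val"):
    synsets = [d for d in (root / split).iterdir() if d.is_dir()]
    n_images = sum(len(list(d.glob("*.JPEG"))) for d in synsets)
    print(f"{split}: {len(synsets)} synsets, {n_images} images")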

Pre-training Models

If you cannot access https://huggingface.co/ smoothly, there are two solutions:

  • Set the Hugging Face endpoint to the mirror website (https://hf-mirror.com) and start training directly:
export HF_ENDPOINT=https://hf-mirror.com

  • Manually download the following pre-trained models from the official or mirror website and copy them into the cache folder as shown below, or modify the config file to point to the paths of local Hugging Face models.

/root/.cache/huggingface/hub
├── models--facebook--dinov2-base
├── models--laion--CLIP-ViT-B-16-laion2B-s34B-b88K
└── models--timm--vit_base_patch14_dinov2.lvd142m
These weights can also be fetched programmatically, which populates the same cache:

# Teacher weights via timm (DINOv2 and CLIP ViT-B/16)
from timm import create_model
teacher_weights = create_model("vit_base_patch14_dinov2.lvd142m", pretrained=True).state_dict()
teacher_weights = create_model("vit_base_patch16_clip_224.laion2b", pretrained=True).state_dict()

# DINOv2 base model via transformers
from transformers import AutoModel
dist_model = AutoModel.from_pretrained("facebook/dinov2-base")
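
If you prefer to pre-fill the cache without instantiating the models, the same three hub repositories can be downloaded with huggingface_hub (a minimal sketch; huggingface_hub is installed alongside transformers, and the HF_ENDPOINT setting above is honored here as well):

# Optional: pre-download the pre-trained models listed above into the local Hugging Face cache.
from huggingface_hub import snapshot_download

for repo_id in (
    "facebook/dinov2-base",
    "laion/CLIP-ViT-B-16-laion2B-s34B-b88K",
    "timm/vit_base_patch14_dinov2.lvd142m",
):
    snapshot_download(repo_id=repo_id)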

Stage I: Training of Visual Tokenizer

🚀 Training Scripts

  • $256\times 256$ MergeVQ-d64 (G+R) Tokenizer Training with multiple nodes:
bash scripts/train_tokenizer/run_256_GR_d64_multi.sh MASTER_ADDR MASTER_PORT NODE_RANK

Alternatively, you can start training and evaluation on a single node, e.g., on 8×A100-80G GPUs with a batch size of 16 and 2 gradient-accumulation steps:

bash scripts/train_tokenizer/run_256_GR_d64_single.sh
  • $256\times 256$ MergeVQ-d96 (G+R) Tokenizer Training with multiple nodes:
bash scripts/train_tokenizer/run_256_GR_d96_multi.sh MASTER_ADDR MASTER_PORT NODE_RANK

Alternatively, you can start training and evaluation on a single node, e.g., on 8×A100-80G GPUs with a batch size of 16 and 2 gradient-accumulation steps:

bash scripts/train_tokenizer/run_256_GR_d96_single.sh
  • $256\times 256$ MergeVQ-d64 (G) Tokenizer Training with multiple nodes:
bash scripts/train_tokenizer/run_256_G_d64_multi.sh MASTER_ADDR MASTER_PORT NODE_RANK

Alternatively, you can start training and evaluation on a single node, e.g., on 8×A100-80G GPUs with a batch size of 8 and 4 gradient-accumulation steps:

bash scripts/train_tokenizer/run_256_G_d64_single.sh

Evaluation Scripts

We gather the evaluation scripts for the experiments above into one bash file; modify the paths to the config files, results, and checkpoints before executing it:

bash scripts/evaluation/evaluation_mergevq.sh

Notes on Errors

If errors occur during training, you may solve them with the following steps:

  • The version of timm. Old versions of timm (e.g., 0.6.13) cause errors when building the Transformer blocks, which can be solved by pip install timm==0.9.11.
  • Errors in building the ImageNet dataset. Although the ImageNet meta files should be generated automatically, you may copy our preprocessed meta files manually if they cannot be generated.

🍺 Performance and Models (Updating)

Tokenizer

| Method | Type | #Tokens | Train Size | Epochs | Codebook Size | rFID (Full) | rFID (Merge) | Checkpoint |
|---|---|---|---|---|---|---|---|---|
| Open-MAGVIT2 | 2D | $16^2$ | $256^2$ | 270 | $2^{18}$ | 1.53 (256) | - | ckpt |
| MergeVQ-d32 (G) | 1D | [256, 1024] | $256^2$ | 200 | $2^{18}$ | 0.48 (1024) | 0.80 (256) | TODO |
| MergeVQ-d64 (G) | 1D | [256, 1024] | $256^2$ | 100 | $2^{18}$ | 0.49 (1024) | 0.91 (256) | TODO |
| MergeVQ-d64 (G) | 1D | [256, 1024] | $256^2$ | 200 | $2^{18}$ | 0.43 (1024) | 0.83 (256) | TODO |
| MergeVQ-d32 (G+R) | 1D | [144, 256] | $256^2$ | 270 | $2^{18}$ | 1.27 (256) | 1.74 (144) | TODO |
| MergeVQ-d64 (G+R) | 1D | [144, 256] | $256^2$ | 270 | $2^{18}$ | 1.12 (256) | 1.48 (144) | TODO |
| MergeVQ-d96 (G+R) | 1D | [144, 256] | $256^2$ | 200 | $2^{18}$ | 1.03 (256) | 1.33 (144) | TODO |

Stage II: Training of Auto-Regressive Models

🚀 Training Scripts

Please see scripts/train_autogressive/run.sh for different model configurations.

bash scripts/train_autogressive/run.sh MASTER_ADDR MASTER_PORT NODE_RANK

🚀 Sample Scripts

Please see scripts/train_autogressive/run.sh for the sampling hyper-parameters of models at different scales.

bash scripts/evaluation/sample_npu.sh Your_Total_Rank  # or scripts/evaluation/sample_gpu.sh Your_Total_Rank

License

This project is released under the Apache 2.0 license.

Acknowledgement

Our implementation is mainly based on the following codebases. We gratefully thank the authors for their wonderful works.

  • VQGAN: Taming Transformers for High-Resolution Image Synthesis.
  • ToMe: Token Merging: Your ViT but Faster.
  • LlamaGen: Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation.
  • SEED-Voken (OpenMAGVIT2): SEED-Voken: A Series of Powerful Visual Tokenizers.
  • pytorch-image-models: PyTorch image models, scripts, pretrained weights.

Citation

If you find this repository helpful, please consider citing:

@inproceedings{cvpr2025mergevq,
    title={MergeVQ: A Unified Framework for Visual Generation and Representation with Disentangled Token Merging and Quantization},
    author={Li, Siyuan and Zhang, Luyuan and Wang, Zedong and Tian, Juanxi and Tan, Cheng and Liu, Zicheng and Yu, Chang and Xie, Qingsong and Lu, Haonan and Wang, Haoqian and Lei, Zhen},
    booktitle={Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2025}
}
