Hyper-Bagel: A Unified Acceleration Framework for Multimodal Understanding and Generation

Yanzuo Lu*,
Xin Xia*,
Manlin Zhang*,
Huafeng Kuang,
Jianbin Zheng,
Yuxi Ren,
Xuefeng Xiao†
ByteDance Seed
* Equal contribution   † Project Lead

Image generation samples produced by our 6-NFE accelerated BAGEL model.

Image editing samples produced by our 6-NFE accelerated BAGEL model.

Abstract


Unified multimodal models have recently attracted considerable attention for their remarkable abilities in jointly understanding and generating diverse content. However, as contexts incorporate an increasing number of interleaved multimodal tokens, the iterative processes of diffusion denoising and autoregressive decoding impose significant computational overhead. To address this, we propose Hyper-Bagel, a unified acceleration framework designed to simultaneously speed up both multimodal understanding and generation tasks. Our approach uses a divide-and-conquer strategy, employing speculative decoding for next-token prediction and a multi-stage distillation process for diffusion denoising. The framework delivers substantial performance gains, achieving over a 2x speedup in multimodal understanding. For generative tasks, our resulting lossless 6-NFE model yields a 16.67x speedup in text-to-image generation and a 22x speedup in image editing, all while preserving the high-quality output of the original model. We further develop a highly efficient 1-NFE model that enables near real-time interactive editing and generation. By combining advanced adversarial distillation with human feedback learning, this model achieves ultimate cost-effectiveness and responsiveness, making complex multimodal interactions seamless and instantaneous.

Speculative Decoding Pipeline


Training pipeline for our proposed speculative decoding approach in Hyper-Bagel.
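For readers unfamiliar with speculative decoding, the following is a minimal, generic draft-then-verify sketch of the technique in the style of standard speculative sampling. It is an illustration only, not the Hyper-Bagel training pipeline: the model interfaces (`target_p`, `draft_p` as callables returning token-probability dicts) and the function name are hypothetical, and real systems verify all drafted tokens in one batched forward pass.

```python
import random

def speculative_decode(target_p, draft_p, prompt, n_new, k=4, rng=None):
    """Generic draft-then-verify speculative decoding sketch.

    target_p / draft_p: callables mapping a token sequence to a dict
    {token: prob} over a small vocabulary. The cheap draft model proposes
    k tokens; the target verifies each, accepting with prob min(1, p_t/p_d)
    and resampling from the residual distribution on rejection, which
    preserves the target model's output distribution.
    """
    rng = rng or random.Random(0)
    seq = list(prompt)

    def sample(dist):
        toks, ps = zip(*sorted(dist.items()))
        return rng.choices(toks, weights=ps, k=1)[0]

    while len(seq) < len(prompt) + n_new:
        # 1) Draft model proposes k tokens autoregressively (cheap).
        proposal, ctx = [], list(seq)
        for _ in range(k):
            t = sample(draft_p(ctx))
            proposal.append(t)
            ctx.append(t)
        # 2) Target model verifies each proposal.
        accepted = []
        for t in proposal:
            ctx_i = seq + accepted
            pt = target_p(ctx_i).get(t, 0.0)
            pd = draft_p(ctx_i)[t]
            if rng.random() < min(1.0, pt / pd):
                accepted.append(t)
            else:
                # Rejection: resample from max(0, p_target - p_draft), renormalized.
                tgt, drf = target_p(ctx_i), draft_p(ctx_i)
                resid = {v: max(0.0, tgt.get(v, 0.0) - drf.get(v, 0.0))
                         for v in tgt}
                if sum(resid.values()) > 0:
                    accepted.append(sample(resid))
                else:
                    accepted.append(sample(tgt))
                break
        else:
            # All k accepted: the verify pass also yields one bonus target token.
            accepted.append(sample(target_p(seq + accepted)))
        seq.extend(accepted)

    return seq[:len(prompt) + n_new]
```

Because rejected positions fall back to the residual distribution, the accelerated decoder's samples match the target model's distribution exactly; the speedup comes from accepting several cheap draft tokens per expensive target verification.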

Distillation Pipeline


Training pipeline for our proposed Distribution Matching Distillation via ODE (DMDO).

Experiment

Quantitative results on GEdit-Bench. The baseline is 132-NFE because it uses a CFG interval of [0.4, 1.0], where text and image CFG are simultaneously enabled only at timesteps within this interval. * Results are cited from those reported in the BAGEL paper. † We reproduce these results under the same training environment.

Quantitative results on GenEval. * Results are cited from those reported in the BAGEL paper. † We reproduce these results under the same training environment.

Qualitative comparison of different accelerated models against the baseline on image generation.

Qualitative comparison of different accelerated models against the baseline on image editing.

BibTeX

@misc{lu2025hyperbagel,
      title={Hyper-Bagel: A Unified Acceleration Framework for Multimodal Understanding and Generation}, 
      author={Yanzuo Lu and Xin Xia and Manlin Zhang and Huafeng Kuang and Jianbin Zheng and Yuxi Ren and Xuefeng Xiao},
      year={2025},
      eprint={2509.18824},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}