TokenFlow

Official PyTorch implementation of "TokenFlow: Consistent Diffusion Features for Consistent Video Editing" (ICLR 2024).


TokenFlow: Consistent Diffusion Features for Consistent Video Editing (ICLR 2024)

[Project Page]

arXiv Hugging Face Spaces
Pytorch

https://github.com/omerbt/TokenFlow/assets/52277000/93dccd63-7e9a-4540-a941-31962361b0bb

TokenFlow is a framework that enables consistent video editing, using a pre-trained text-to-image diffusion model, without any further training or finetuning.

The generative AI revolution has been recently expanded to videos. Nevertheless, current state-of-the-art video models are still lagging behind image models in terms of visual quality and user control over the generated content. In this work, we present a framework that harnesses the power of a text-to-image diffusion model for the task of text-driven video editing. Specifically, given a source video and a target text-prompt, our method generates a high-quality video that adheres to the target text, while preserving the spatial layout and dynamics of the input video. Our method is based on our key observation that consistency in the edited video can be obtained by enforcing consistency in the diffusion feature space. We achieve this by explicitly propagating diffusion features based on inter-frame correspondences, readily available in the model. Thus, our framework does not require any training or fine-tuning, and can work in conjunction with any off-the-shelf text-to-image editing method. We demonstrate state-of-the-art editing results on a variety of real-world videos.

For more results and details, see the project webpage.
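The core idea of propagating diffusion features via inter-frame correspondences can be sketched as a nearest-neighbour lookup in feature space. The toy below is a conceptual illustration only (the function name, shapes, and data are made up; the actual method operates on UNet tokens inside the diffusion model), not the repo's implementation:

```python
import numpy as np

def propagate_features(frame_feats, key_src, key_edited):
    """Toy sketch: for each frame, match its source tokens to the
    keyframe's source tokens by nearest neighbour, then copy over the
    corresponding *edited* keyframe features."""
    out = []
    for f in frame_feats:
        # pairwise squared distances: (n_tokens, n_key_tokens)
        d = ((f[:, None, :] - key_src[None, :, :]) ** 2).sum(-1)
        nn = d.argmin(axis=1)       # index of the closest keyframe token
        out.append(key_edited[nn])  # propagate the edited feature
    return np.stack(out)

rng = np.random.default_rng(0)
key = rng.normal(size=(16, 8))        # keyframe: 16 tokens, 8-dim features
edited = key + 1.0                    # stand-in for "edited" features
frames = [key + 0.01 * rng.normal(size=key.shape) for _ in range(3)]
prop = propagate_features(frames, key, edited)
```

Because each toy frame is a small perturbation of the keyframe, every token matches itself, so the edit is transferred consistently to all frames.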

Sample results

Environment

conda create -n tokenflow python=3.9
conda activate tokenflow
pip install -r requirements.txt

Preprocess

Preprocess your video by running the following command:

python preprocess.py --data_path <data/myvideo.mp4> \
                     --inversion_prompt <'' or a string describing the video content>

Additional arguments:

                     --save_dir <latents>
                     --H <video height>
                     --W <video width>
                     --sd_version <Stable-Diffusion version>
                     --steps <number of inversion steps>
                     --save_steps <number of sampling steps that will be used later for editing>
                     --n_frames <number of frames>
                     

more information on the arguments can be found here.
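A concrete invocation combining these arguments might look like the following (the file path, prompt, and values are placeholders to adapt to your clip; the flags themselves are the ones listed above):

```shell
python preprocess.py --data_path data/myvideo.mp4 \
                     --inversion_prompt "a woman running on the beach" \
                     --save_dir latents \
                     --H 512 --W 512 \
                     --steps 500 \
                     --save_steps 50 \
                     --n_frames 40
```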

Note:

The video reconstruction will be saved as inverted.mp4. A good reconstruction is required for successful editing with our method.

Editing

  • TokenFlow is designed for structure-preserving edits.
  • Our method is built on top of an image editing technique (e.g., Plug-and-Play, ControlNet, etc.) - therefore, it is important to ensure that the edit works with the chosen base technique.
  • The LDM decoder may introduce some jitter, depending on the original video.

To edit your video, first create a yaml config as in configs/config_pnp.yaml.
Then run

python run_tokenflow_pnp.py
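As a rough sketch, a config might contain entries along these lines; the key names below are hypothetical illustrations, so use configs/config_pnp.yaml from the repo as the authoritative template:

```yaml
# Hypothetical key names for illustration only —
# copy configs/config_pnp.yaml and edit it instead.
data_path: data/myvideo.mp4   # source video (after preprocessing)
latents_path: latents         # where preprocess.py saved the inverted latents
prompt: "a cartoon fox running on the beach"   # target edit prompt
```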

Similarly, if you want to use ControlNet or SDEdit, create a yaml config as in configs/config_controlnet.yaml or configs/config_SDEdit.yaml and run python run_tokenflow_controlnet.py or python run_tokenflow_SDEdit.py respectively.

Citation

@article{tokenflow2023,
        title = {TokenFlow: Consistent Diffusion Features for Consistent Video Editing},
        author = {Geyer, Michal and Bar-Tal, Omer and Bagon, Shai and Dekel, Tali},
        journal={arXiv preprint arXiv:2307.10373},
        year={2023}
        }
