Add text_encoder_dtype and compile_text_encoder config parameters for Wan text encoders.#408

Open
ninatu wants to merge 1 commit into main from ninatu/fix_text_encoder

Conversation

@ninatu
Collaborator

@ninatu ninatu commented May 15, 2026

Mitigates #397 by allowing the text encoder dtype to be configured separately from `weights_dtype`. Separating these parameters makes it possible to compile/load the text encoder in float32 (the default behavior before #397) while keeping the model weights in bfloat16, and adds a configuration parameter that makes text encoder compilation optional (the text encoder was not compiled before #397).

This also addresses failures in some hermetic/packaged environments where running `torch.compile` on the text encoder causes issues.
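The intended interaction between the two new parameters can be sketched as follows. This is an illustrative sketch, not the repository's actual code: the helper names, the fallback of `text_encoder_dtype` to `weights_dtype`, and the `compile_text_encoder=False` default are assumptions chosen to match the behavior described above.

```python
def resolve_text_encoder_settings(config: dict) -> tuple[str, bool]:
    """Resolve the text encoder dtype and compilation flag from a config dict.

    Assumption: text_encoder_dtype falls back to weights_dtype when unset,
    and compile_text_encoder defaults to False (no compilation), matching
    the pre-#397 behavior described in the PR.
    """
    dtype = config.get("text_encoder_dtype", config["weights_dtype"])
    return dtype, config.get("compile_text_encoder", False)


def load_text_encoder(load_fn, config: dict):
    """Load the text encoder via load_fn(dtype=...) and optionally compile it.

    load_fn is a hypothetical loader callback; torch is imported lazily so
    the pure config logic above stays dependency-free.
    """
    import torch

    dtype, do_compile = resolve_text_encoder_settings(config)
    encoder = load_fn(dtype=dtype)
    # Hermetic/packaged environments can skip torch.compile entirely.
    return torch.compile(encoder) if do_compile else encoder
```

With this split, a config such as `{"weights_dtype": "bfloat16", "text_encoder_dtype": "float32"}` keeps the main weights in bfloat16 while the text encoder is loaded in float32 and left uncompiled.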

@ninatu ninatu requested a review from entrpn as a code owner May 15, 2026 15:16

@ninatu ninatu requested a review from Perseus14 May 15, 2026 15:17
@ninatu ninatu changed the title Add text_encoder_dtype and compile_text_encoder config parameters… Add text_encoder_dtype and compile_text_encoder config parameters for Wan text encoders. May 15, 2026

2 participants