Commit 85cec45

Address comments
Signed-off-by: Kunjan Patel <kunjanp@google.com>
1 parent 463baaf commit 85cec45

File tree

1 file changed: +1 -1 lines changed


src/maxdiffusion/configs/base_wan_14b.yml

Lines changed: 1 addition & 1 deletion
@@ -63,7 +63,7 @@ attention: 'flash' # Supported attention: dot_product, flash, cudnn_flash_te, ri
 flash_min_seq_length: 0
 
 # If mask_padding_tokens is True, we pass in segment ids to splash attention to avoid attending to padding tokens.
-# Else we do not pass in segment ids and on vpu bound hardware like (ironwood) this is faster.
+# Else we do not pass in segment ids and on vpu bound hardware like trillium this is faster.
 # However, when padding tokens are significant, this will lead to worse quality and should be set to True.
 mask_padding_tokens: True
 # Maxdiffusion has 2 types of attention sharding strategies:
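For context on what the updated comment describes, below is a minimal, self-contained JAX sketch of segment-id masking. It is not the Pallas splash-attention kernel maxdiffusion actually calls; the function and array names are illustrative only. The idea is that padding positions get a distinct segment id, and attention scores across segment boundaries are masked out, which is what passing segment ids (mask_padding_tokens: True) accomplishes.

import jax
import jax.numpy as jnp

def segment_ids_from_lengths(lengths, max_len):
    # Hypothetical helper: real tokens in each sequence get segment id 1,
    # padding positions get segment id 0.
    positions = jnp.arange(max_len)[None, :]                   # (1, max_len)
    return (positions < lengths[:, None]).astype(jnp.int32)    # (batch, max_len)

def masked_attention(q, k, v, segment_ids):
    # q, k, v: (batch, seq, dim); segment_ids: (batch, seq)
    scores = jnp.einsum("bqd,bkd->bqk", q, k) / jnp.sqrt(q.shape[-1])
    # Only allow attention between positions that share a segment id,
    # so real tokens never attend to padding (and vice versa).
    same_segment = segment_ids[:, :, None] == segment_ids[:, None, :]
    scores = jnp.where(same_segment, scores, -1e30)
    weights = jax.nn.softmax(scores, axis=-1)
    return jnp.einsum("bqk,bkd->bqd", weights, v)

With mask_padding_tokens: False, no segment ids are passed and the equivalent of the same_segment check is skipped: the kernel runs over the full padded sequence, which avoids the masking overhead on VPU-bound hardware but lets real tokens attend to padding, degrading quality when padding is significant.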
