Conversation

@Cyrilvallez Cyrilvallez (Member) commented Mar 14, 2025

What does this PR do?

As per the title. It's easier to use a regex from the beginning than to start with a list and switch to a regex near the end of the loading logic.
Also, make sure the regex can only match full layer names. Otherwise, short and poorly chosen layer names such as `wo` in T5 could match layers such as `word_embedding`, which contains `wo` as a substring.
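
A minimal sketch of why anchoring matters (illustrative only, not the PR's actual code; the layer names are examples):

```python
import re

patterns = ["wo"]  # e.g. T5's _keep_in_fp32_modules entry
good = "encoder.block.0.layer.1.DenseReluDense.wo.weight"
bad = "shared.word_embedding.weight"

# A bare substring search is too permissive: "wo" also hits "word_embedding".
assert re.search("wo", bad)

# Anchoring each pattern to full dot-separated name segments avoids the false positive.
regex = re.compile("|".join(rf"(^|\.){re.escape(p)}($|\.)" for p in patterns))
assert regex.search(good)
assert not regex.search(bad)
```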

Also, apply the flag only when loading in fp16 or when using a quantizer that expects it: `_keep_in_fp32_modules` was introduced only to avoid issues when casting bf16 -> fp16 (see #20287 for details). That is, despite the flag's name, the layers should not always be kept in fp32. I added a detailed comment about this in the code, because it is not at all clear otherwise.
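
As a rough sketch of the intended condition (hypothetical helper and attribute names, not the actual `modeling_utils.py` logic):

```python
import torch

def should_apply_keep_in_fp32(torch_dtype, hf_quantizer=None):
    # _keep_in_fp32_modules exists to work around modules that break when a
    # bf16 checkpoint is downcast to fp16 (see #20287), so it only needs to
    # apply when loading in fp16, or when a quantizer explicitly expects it.
    # "requires_fp32_modules" is a made-up attribute name for illustration.
    if hf_quantizer is not None and getattr(hf_quantizer, "requires_fp32_modules", False):
        return True
    return torch_dtype == torch.float16
```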

@github-actions github-actions bot marked this pull request as draft March 14, 2025 13:13
@github-actions github-actions bot (Contributor) commented:

Hi 👋, thank you for opening this pull request! The pull request is converted to draft by default. When it is ready for review, please click the Ready for review button (at the bottom of the PR page).

@Cyrilvallez Cyrilvallez marked this pull request as ready for review March 14, 2025 13:14
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@ArthurZucker ArthurZucker (Collaborator) left a comment

Nice! One thing is missing: a check, enforced at init time like the TP plan, that the layers actually exist in the model.
I think there might be some weirdness for Blip2 and T5, but at least show a warning!
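
A minimal sketch of such an init-time check (hypothetical helper, not code from this PR):

```python
import re
import logging

logger = logging.getLogger(__name__)

def warn_on_unmatched_fp32_patterns(model, patterns):
    # Warn when a _keep_in_fp32_modules entry matches no module in the
    # instantiated model (e.g. a typo, or a layer name from another config).
    module_names = [name for name, _ in model.named_modules()]
    for pattern in patterns:
        regex = re.compile(rf"(^|\.){re.escape(pattern)}($|\.)")
        if not any(regex.search(name) for name in module_names):
            logger.warning(
                f"'{pattern}' in _keep_in_fp32_modules matches no module "
                f"of {model.__class__.__name__}"
            )
```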

@ArthurZucker ArthurZucker (Collaborator) left a comment

Also, some documentation is missing + maybe deprecate the non-regex patterns, no?

@Cyrilvallez Cyrilvallez (Member, Author) commented:

I just modified blip2 so that no model in the library can have the flag set to a non-existing layer (😮‍💨), so I don't think we actually need a check at init time - IMO it would clutter the code for something that would never happen. I can add it to be super super safe though!

@ArthurZucker ArthurZucker (Collaborator) left a comment

LGTM! IMO, from_pretrained should check the layer names to make sure typos are avoided, for example.

@Cyrilvallez Cyrilvallez merged commit dd3933d into main Mar 21, 2025
24 checks passed
@Cyrilvallez Cyrilvallez deleted the fix-keep-fp32 branch March 21, 2025 15:13
zucchini-nlp pushed a commit to zucchini-nlp/transformers that referenced this pull request May 14, 2025
* better regex everywhere

* fix

* Update test_modeling_instructblip.py

* BC with explanations this time otherwise it makes no sense at all

* Update test_modeling_instructblip.py

* style

* CIs

* update _keep_in_fp32_modules in blip2

* Update modeling_utils.py

* Update modeling_utils.py

* style

* CIs

* add check

* trigger CIs

* Update modeling_utils.py

* trigger CIs
