Totally rewrite how pipelines load preprocessors #38947
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
all the tests are green on my first commit this is terrifying
Tests seem to pass locally, so cc @ArthurZucker @Cyrilvallez for core maintainer review!
Thanks for the improvement! This looks a lot more intuitive.
```python
# Try to infer tokenizer from model or config name (if provided as str)
if tokenizer is None:
    if isinstance(model_name, str):
        tokenizer = model_name
```
I think I'm reading this correctly that using the string model name (path or HF name) will delegate to `AutoTokenizer` below. If so, this seems much cleaner!
Yes! The actual loading code is unchanged. What has changed is *when* we attempt it: rather than reading model mappings, we load a tokenizer iff:

- `_load_tokenizer` is `None` (optional) or `True` (required)
- The user didn't already pass a `Tokenizer` object to the `tokenizer` arg of the constructor

(sketched below)
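A condensed sketch of that rule, assuming the shape of the surrounding code (the flag name and `model_name` come from the PR; the exact control flow here is illustrative, not the merged code):

```python
from transformers import AutoTokenizer

# Illustrative only: True = required, None = optional, False = never load.
if self._load_tokenizer is not False and tokenizer is None:
    try:
        tokenizer = AutoTokenizer.from_pretrained(model_name)
    except Exception:
        if self._load_tokenizer:  # True: the tokenizer is required, so re-raise
            raise
        tokenizer = None  # None: optional, continue without a tokenizer
```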
Hey @Rocketknight1! Sorry for the delay on this one! It looks great overall, but there seem to be some unnecessary checks/dead code paths that we can simplify! I flagged them only for the tokenizer and image processor, but it's the case for all 4 of them, as it's the same logic!
```python
if tokenizer is None and (load_tokenizer or load_tokenizer is None):
    try:
        # Try to infer tokenizer from model or config name (if provided as str)
        if tokenizer is None:
```
The second check is redundant with the previous one: `tokenizer` is necessarily `None` here! (See the sketch below.)
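A sketch of the suggested simplification (assumed; the merged code isn't shown in this excerpt, and the `config` fallback mirrors the image-processor block quoted further down):

```python
# The outer condition already guarantees `tokenizer is None`,
# so the inner check can simply be dropped:
if tokenizer is None and (load_tokenizer or load_tokenizer is None):
    # Try to infer tokenizer from model or config name (if provided as str)
    if isinstance(model_name, str):
        tokenizer = model_name
    elif isinstance(config, str):
        tokenizer = config
```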
```python
# Instantiate tokenizer if needed
if isinstance(tokenizer, (str, tuple)):
    if isinstance(tokenizer, tuple):
        # For tuple we have (tokenizer name, {kwargs})
        use_fast = tokenizer[1].pop("use_fast", use_fast)
        tokenizer_identifier = tokenizer[0]
        tokenizer_kwargs = tokenizer[1]
    else:
```
As far as I can see, this can only ever be `str` here.
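If that's right, the tuple branch could go entirely; a sketch of what would remain (assumed, not the merged code; the kwargs mirror the `AutoImageProcessor` call quoted below):

```python
# Instantiate tokenizer if needed; `tokenizer` can only be a str at this point
if isinstance(tokenizer, str):
    tokenizer = AutoTokenizer.from_pretrained(
        tokenizer, use_fast=use_fast, _from_pipeline=task, **hub_kwargs, **model_kwargs
    )
```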
```python
if image_processor is None and (load_image_processor or load_image_processor is None):
    try:
        # Try to infer image processor from model or config name (if provided as str)
        if image_processor is None:
            if isinstance(model_name, str):
                image_processor = model_name
            elif isinstance(config, str):
                image_processor = config
            # Backward compatibility, as `feature_extractor` used to be the name
            # for `ImageProcessor`.
            elif feature_extractor is not None and isinstance(feature_extractor, BaseImageProcessor):
                image_processor = feature_extractor
            else:
                # Impossible to guess what is the right image_processor here
                raise Exception(
                    "Impossible to guess which image processor to use. "
                    "Please provide a PreTrainedImageProcessor class or a path/identifier "
                    "to a pretrained image processor."
                )

# Instantiate image_processor if needed
if isinstance(image_processor, (str, tuple)):
    image_processor = AutoImageProcessor.from_pretrained(
        image_processor, _from_pipeline=task, **hub_kwargs, **model_kwargs
    )
```
Same as above: as far as I can see, the exact same logical branches are redundant/never reached here.
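A sketch of the equivalent simplification for the image processor (assumed, mirroring the tokenizer case; only the provably dead inner `None` check is dropped, and the backward-compatibility branch is kept):

```python
if image_processor is None and (load_image_processor or load_image_processor is None):
    # The outer condition already guarantees `image_processor is None`
    if isinstance(model_name, str):
        image_processor = model_name
    elif isinstance(config, str):
        image_processor = config
    # Backward compatibility: `feature_extractor` used to be the name for `ImageProcessor`
    elif feature_extractor is not None and isinstance(feature_extractor, BaseImageProcessor):
        image_processor = feature_extractor
```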
@Cyrilvallez you were totally right! The tests should have just checked if
Pipelines load preprocessors in a very hacky way, by checking various mappings. This creates a lot of weird side-effects, such as models needing to be added to tokenizer mapping lists or else they can't be used in pipelines.
This PR overhauls everything. Every pipeline has attributes `_load_processor`, `_load_image_processor`, `_load_feature_extractor` and `_load_tokenizer`. These are set on the base `Pipeline` class and should be overridden by all subclasses. They have the following possible values:

- `True`: this preprocessor must be loaded; always try to load it and throw an error if we fail.
- `False`: this preprocessor is not used by the pipeline; don't try to load it.
- `None`: this preprocessor is optional; try to load it but continue even if loading fails.

This lets us delete a lot of mappings and simplify things!
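As an illustration of how a subclass might declare these flags (a hypothetical pipeline, not code from the PR; the usual pipeline methods are omitted for brevity):

```python
from transformers import Pipeline

class MyTextPipeline(Pipeline):
    # Hypothetical example: a text-only pipeline that requires a tokenizer,
    # never loads vision preprocessors, and treats a processor as optional.
    _load_tokenizer = True           # required: failure to load raises an error
    _load_image_processor = False    # unused by this pipeline: never attempted
    _load_feature_extractor = False  # unused by this pipeline: never attempted
    _load_processor = None           # optional: attempted, but failure is tolerated
```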