Any plan to support batch inference for local run? #1

Open
RyanHuangNLP opened this issue Dec 28, 2023 · 3 comments
Labels: enhancement (New feature or request)

Comments

@RyanHuangNLP

No description provided.

@praeclarumjj3 (Member)

Hi @RyanHuangNLP, thanks for your interest. We haven't explicitly planned on doing that. I may push a script if I find the time in the near future.

That said, it shouldn't be too hard to write your own batch inference script. You could take hints from the model_seg_loader.py script if you plan on doing that. Let me know if you have any other questions.
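For reference, a very rough sketch of what such a batching loop could look like is below; `load_model` and `preprocess` are just placeholders standing in for the loading and pre-processing logic in model_seg_loader.py, not the actual API, so the exact calls will differ:

```python
import torch
from PIL import Image


def load_model(device: str = "cuda"):
    # Placeholder: swap in the model construction / checkpoint loading
    # from model_seg_loader.py.
    raise NotImplementedError


def preprocess(image: Image.Image) -> torch.Tensor:
    # Placeholder: swap in the same resizing / normalization that
    # model_seg_loader.py applies to a single image.
    raise NotImplementedError


@torch.no_grad()
def run_batched(image_paths, batch_size: int = 8, device: str = "cuda"):
    model = load_model(device).eval()
    results = []
    for start in range(0, len(image_paths), batch_size):
        paths = image_paths[start:start + batch_size]
        # Stack the per-image tensors into one (B, C, H, W) batch so the
        # model sees a single forward pass per batch instead of per image.
        batch = torch.stack(
            [preprocess(Image.open(p).convert("RGB")) for p in paths]
        ).to(device)
        results.append(model(batch))  # collect per-batch outputs
    return results
```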

praeclarumjj3 added the enhancement label on Jan 1, 2024
RyanHuangNLP (Author) commented Jan 2, 2024

@praeclarumjj3 thanks, I will give it a try.

dg845 commented Jan 10, 2024

I have opened a PR to add a batched inference script: #3.
