
[LoRA] Add LoRA support for InternVL #18842


Merged

merged 1 commit into vllm-project:main on May 29, 2025

Conversation

jeejeelee
Collaborator

FIX #18820

Signed-off-by: Jee Jee Li <[email protected]>

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of tests to quickly catch errors. You can run the remaining CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

Member

@DarkLight1337 DarkLight1337 left a comment


I wonder whether we can have a simple test that applies a randomly initialized LoRA adapter, just to check that LoRA adapters can be loaded into the model? (Can do this in another PR)

@DarkLight1337 DarkLight1337 enabled auto-merge (squash) May 29, 2025 02:32
@github-actions github-actions bot added the ready label (ONLY add when PR is ready to merge / full CI is needed) May 29, 2025
@jeejeelee
Collaborator Author

> I wonder whether we can have a simple test that applies a randomly initialized LoRA adapter, just to check that LoRA adapters can be loaded into the model? (Can do this in another PR)

We only need to set enable_lora=True when initializing the LLM to achieve this testing purpose, but considering the CI pressure, I'm not sure it's necessary.
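
For reference, a minimal sketch of the check being described, assuming an illustrative InternVL checkpoint name and LoRA settings (neither is taken from this PR):

# Minimal sketch: constructing the LLM with enable_lora=True exercises
# the model's LoRA plumbing. Model name and max_loras are assumptions.
from vllm import LLM

llm = LLM(
    model="OpenGVLab/InternVL2-2B",
    enable_lora=True,
    max_loras=1,
)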

@DarkLight1337
Member

We can add a test similar to test_can_initialize and skip the test by default. The test can be enabled manually and run locally.
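
A hedged sketch of that pattern, assuming a hypothetical env-var gate, test name, and model (none of which are from this PR):

# Sketch of a test that is skipped by default and enabled manually.
# The env var, test name, and model are illustrative assumptions.
import os

import pytest

from vllm import LLM

@pytest.mark.skipif(
    os.getenv("VLLM_RUN_LORA_INIT_TESTS") != "1",
    reason="Run manually: set VLLM_RUN_LORA_INIT_TESTS=1 to enable.",
)
def test_can_initialize_with_lora():
    # Merely constructing the engine checks that the LoRA code paths load.
    llm = LLM(model="OpenGVLab/InternVL2-2B", enable_lora=True, max_loras=1)
    assert llm is not None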

@jeejeelee
Collaborator Author

> We can add a test similar to test_can_initialize and skip the test by default. The test can be enabled manually and run locally.

Sounds good, I will try adding this in a follow-up PR.

Comment on lines +1022 to +1029
packed_modules_mapping = {
    "wqkv": ["wqkv"],  # InternLM2's fused QKV projection
    "qkv": ["qkv"],  # vision tower's fused QKV projection
    "gate_up_proj": [
        "w1",  # InternLM2 gate projection
        "w3",  # InternLM2 up projection
    ],
}
Collaborator

I remember these names are used by InternLM2. Does this work for InternVL models with other backbones besides InternLM2 (like Qwen2.5)?

Collaborator Author

It probably isn't supported.
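
For context, a hypothetical sketch of what the mapping would need to look like for a Qwen2.5 language backbone, which keeps separate q/k/v and gate/up projections rather than InternLM2's fused wqkv and w1/w3 (names follow Qwen2's module layout; this is not code from this PR):

packed_modules_mapping = {
    # Qwen2.5 module names differ from InternLM2's, so the mapping
    # above would not match them as written.
    "qkv_proj": ["q_proj", "k_proj", "v_proj"],
    "gate_up_proj": ["gate_proj", "up_proj"],
}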

@DarkLight1337 DarkLight1337 merged commit 34d6c44 into vllm-project:main May 29, 2025
76 of 77 checks passed
@jeejeelee jeejeelee deleted the internvl-support-lora branch May 29, 2025 10:14
Labels
ready: ONLY add when PR is ready to merge / full CI is needed
Development

Successfully merging this pull request may close these issues.

[Doc]: The documentation's description of InternVL's LoRA support does not match reality
3 participants