Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchchat/1542
Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure, 1 Cancelled Job, 2 Unrelated Failures

As of commit e56ed4b with merge base 98a5ac7:

NEW FAILURE - The following job has failed:
CANCELLED JOB - The following job was cancelled. Please retry:
BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures
This comment was automatically generated by Dr. CI and updates every 15 minutes.
zhenyan-zhang-meta
left a comment
LGTM with comments. Can land after signal pass.
# For llama::sdpa_with_kv_cache.out, preprocess ops
from executorch.extension.llm.custom_ops import custom_ops  # no-qa
nit: what difference does this change make?
This was the original import order prior to the other PR.
We preserve the order because of how extension libraries interact: we want custom_ops to overwrite potential conflicts.
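The "later import wins" behavior the author is relying on can be illustrated with a minimal registry simulation. This is purely a sketch of the idea, not ExecuTorch's actual op-registration mechanism; the `"generic"`/`"custom"` labels and `register_op` helper are hypothetical.

```python
# Minimal simulation of last-registration-wins op dispatch, illustrating
# why the custom_ops import must come after other extension libraries.
# Hypothetical sketch; not the actual ExecuTorch registration code.
op_registry: dict[str, str] = {}

def register_op(name: str, impl: str) -> None:
    # Later registrations overwrite earlier ones for the same op name.
    op_registry[name] = impl

# A generic extension library registers the op first...
register_op("llama::sdpa_with_kv_cache.out", "generic")
# ...then custom_ops is imported last, so its kernel takes precedence.
register_op("llama::sdpa_with_kv_cache.out", "custom")

print(op_registry["llama::sdpa_with_kv_cache.out"])  # -> custom
```

If the imports were reversed, the generic implementation would silently shadow the custom kernel, which is why the original ordering is preserved.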
@@ -911,6 +904,14 @@ def _gen_model_input(
    return encoded, None


# Llama 3.2 11B
Are we sure that everything below is for Llama 3.2 11B? If so, good to go; otherwise we need an if-else.
Will this function have more possibilities in the future? If so, we should list them with an if-else or switch-case chain.
Yup, everything below is 11B.
If more models require casing later, this function as a whole should be refactored at that time.
Follow-up patch from #1539, which was meant to make torchtune an optional import.
Tested with:
- Success only when torchtune installed
- Success regardless of torchtune install
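The optional-import pattern being tested can be sketched as follows. This is an illustrative sketch, not the literal torchchat code; the `HAS_TORCHTUNE` flag and `require_torchtune` guard are hypothetical names.

```python
# Sketch of an optional-import pattern: importing the module must not
# fail at load time when torchtune is absent. (Illustrative only.)
try:
    import torchtune  # optional dependency
    HAS_TORCHTUNE = True
except ImportError:
    HAS_TORCHTUNE = False

def require_torchtune() -> None:
    # Hypothetical guard for code paths (e.g. multimodal models) that
    # genuinely need torchtune; fails with an actionable message.
    if not HAS_TORCHTUNE:
        raise RuntimeError(
            "torchtune is required for this model; pip install torchtune"
        )
```

With this structure, plain text-model paths succeed regardless of whether torchtune is installed, while torchtune-dependent paths fail only when it is missing, matching the two tested scenarios above.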