[QNN EP] Passthrough EP Parameters in Node #23468
base: main
Are you sure you want to change the base?
Conversation
Update forked repository to latest
/azp run Big Models,Linux Android Emulator QNN CI Pipeline,Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline
/azp run Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,Windows x64 QNN CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed
/azp run ONNX Runtime Web CI Pipeline,Win_TRT_Minimal_CUDA_Test_CI,Windows GPU CUDA CI Pipeline,Windows GPU DML CI Pipeline,Windows GPU Doc Gen CI Pipeline
Azure Pipelines successfully started running 5 pipeline(s).
Azure Pipelines successfully started running 9 pipeline(s).
Azure Pipelines successfully started running 5 pipeline(s).
Sorry for the late review. I think we should first define the JavaScript API for QNN here, and then have this PR parse/read the options accordingly.
…ault elsewhere. No need to set it here.
Please let me know if you are good with the change (naming and typing). If you are OK with it, please approve the PR and I will merge it.
### Description As a pre-requisite of #23468
/azp run Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,Windows x64 QNN CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed
/azp run ONNX Runtime Web CI Pipeline,Win_TRT_Minimal_CUDA_Test_CI,Windows GPU CUDA CI Pipeline,Windows GPU DML CI Pipeline,Windows GPU Doc Gen CI Pipeline
/azp run Big Models,Linux Android Emulator QNN CI Pipeline,Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline
Azure Pipelines successfully started running 5 pipeline(s).
Azure Pipelines successfully started running 5 pipeline(s).
Azure Pipelines successfully started running 9 pipeline(s).
```cpp
std::unordered_map<std::string, std::string> qnn_options;
qnn_options["backend_path"] = "QnnHtp.dll";
qnn_options["enable_htp_fp16_precision"] = "1";
// Ensure that the backend_path and enable_htp_fp16_precision options are set to default values if not provided.
```
@fs-eire, @HectorSVC if I'm understanding correctly, after #23486 was added, there's a mismatch between the camelCase keys and the snake_case keys expected by the QNN EP? WebGPU uses camelCase for both?
I checked the implementation and found that it has a few errors. Since the errors are not limited to QNN (there are also places to fix for CoreML and WebGPU), I made this change. Please refer to 4ce51f3 as a patch to this PR.
@joncamp would you be able to apply @fs-eire's patch to this PR? Let's try to merge it into main ASAP (for the upcoming ORT 1.21 release; code freeze is end of week).
Description
The existing implementation of session options for the QNN EP does not honor the various bindings available; even if set at runtime, the options are ignored. The fix is to follow the pattern of the webgpu provider and parse/populate the options accordingly. Existing defaults are preserved, so if options are not set, the prior behavior persists.
Motivation and Context
During debugging and development of Node implementations using the QNN EP, the need to set various parameters became apparent. Currently, the parameters can only be changed by editing the ORT DLL code itself, which is inflexible and slows development.