
[QNN EP] Passthrough EP Parameters in Node #23468

Open · joncamp wants to merge 17 commits into main

Conversation

@joncamp (Contributor) commented Jan 23, 2025

Description

The existing implementation of session options for the QNN EP does not honor the various bindings available in Node: even when options are set at runtime, they are ignored. The fix follows the pattern of the WebGPU provider and parses/populates the options accordingly.

Existing defaults are preserved, so if no options are set the prior behavior persists.

Motivation and Context

During debugging and development of Node implementations using the QNN EP, the need to set various parameters became apparent. Currently, these parameters can only be changed by modifying the ORT DLL code itself, which is inflexible and slows development.
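
For illustration, a minimal sketch of what passing QNN EP options from onnxruntime-node could look like once they are honored. The option names (backendPath, enableFp16Precision) and the 'qnn' provider name follow the JavaScript API proposed in #23486 and are assumptions for this sketch, not the final merged interface.

// sketch.ts — hypothetical usage, assuming the QNN option names proposed in #23486
import * as ort from 'onnxruntime-node';

async function createQnnSession(modelPath: string): Promise<ort.InferenceSession> {
  return ort.InferenceSession.create(modelPath, {
    executionProviders: [
      {
        name: 'qnn',                 // assumed EP name for the Node binding
        backendPath: 'QnnHtp.dll',   // assumed key; forwarded to the EP as backend_path
        enableFp16Precision: true,   // assumed key; forwarded as enable_htp_fp16_precision
      },
    ],
  });
}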

@jywu-msft requested a review from fs-eire on January 23, 2025 01:35
@HectorSVC (Contributor)

/azp run Big Models,Linux Android Emulator QNN CI Pipeline,Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline

@HectorSVC (Contributor)

/azp run Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,Windows x64 QNN CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed

@HectorSVC (Contributor)

/azp run ONNX Runtime Web CI Pipeline,Win_TRT_Minimal_CUDA_Test_CI,Windows GPU CUDA CI Pipeline,Windows GPU DML CI Pipeline,Windows GPU Doc Gen CI Pipeline


Azure Pipelines successfully started running 5 pipeline(s).


Azure Pipelines successfully started running 9 pipeline(s).


Azure Pipelines successfully started running 5 pipeline(s).

@fs-eire (Contributor) commented Jan 24, 2025

Sorry for the late review.

I think we should first define the JavaScript API for QNN here, and then parse/read the options accordingly in this PR.

@fs-eire (Contributor) commented Jan 24, 2025

Sorry for the late review.

I think we should first define the JavaScript API for QNN here, and then parse/read the options accordingly in this PR.

I created #23486. Please help take a look
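
For context, a sketch of the kind of typed option interface such a JavaScript API could define; the field names here are assumptions for illustration, not necessarily the contents of #23486.

// Hypothetical shape of a QNN execution provider option interface (assumed field names).
export interface QnnExecutionProviderOption {
  readonly name: 'qnn';
  backendPath?: string;           // e.g. 'QnnHtp.dll'
  enableFp16Precision?: boolean;  // would map to the EP's enable_htp_fp16_precision
}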

@HectorSVC added the ep:QNN (issues related to QNN execution provider) label Jan 24, 2025
@jywu-msft (Member)

Sorry for the late review.
I think we should first define the JavaScript API for QNN here, and then parse/read the options accordingly in this PR.

I created #23486. Please help take a look

thanks @fs-eire! Can #23486 be merged?

@fs-eire (Contributor) commented Jan 29, 2025

Sorry for the late review.
I think we should first define the JavaScript API for QNN here, and then parse/read the options accordingly in this PR.

I created #23486. Please help take a look

thanks @fs-eire! Can #23486 be merged?

Please let me know if you are good with the change (naming and typing). Please approve the PR if you are OK with it, and I will merge it.

fs-eire added a commit that referenced this pull request Feb 4, 2025
@jywu-msft (Member)

/azp run Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,Windows x64 QNN CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed

@jywu-msft (Member)

/azp run ONNX Runtime Web CI Pipeline,Win_TRT_Minimal_CUDA_Test_CI,Windows GPU CUDA CI Pipeline,Windows GPU DML CI Pipeline,Windows GPU Doc Gen CI Pipeline

@jywu-msft (Member)

/azp run Big Models,Linux Android Emulator QNN CI Pipeline,Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline


Azure Pipelines successfully started running 5 pipeline(s).


Azure Pipelines successfully started running 5 pipeline(s).


Azure Pipelines successfully started running 9 pipeline(s).

std::unordered_map<std::string, std::string> qnn_options;
qnn_options["backend_path"] = "QnnHtp.dll";
qnn_options["enable_htp_fp16_precision"] = "1";
// Ensure that the backend_path and enable_htp_fp16_precision options are set to default values if not provided.
Member

@fs-eire, @HectorSVC if I'm understanding correctly, after #23486 was added, there's a mismatch between the camelCase keys and the snake_case keys expected by the QNN EP? WebGPU uses camelCase for both?
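
As a hypothetical illustration of the mismatch being discussed, the binding layer could translate camelCase keys from the JavaScript API into the snake_case keys the QNN EP expects. The camelCase names below are assumptions; the snake_case names are the ones shown in the snippet above.

// Hypothetical key-normalization sketch, not the actual binding code.
const QNN_KEY_MAP: Record<string, string> = {
  backendPath: 'backend_path',
  enableFp16Precision: 'enable_htp_fp16_precision',
};

// Convert camelCase JS options into the snake_case string map the QNN EP consumes.
function toQnnProviderOptions(jsOptions: Record<string, unknown>): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [key, value] of Object.entries(jsOptions)) {
    if (key === 'name') continue;              // the EP name itself is not a provider option
    const mapped = QNN_KEY_MAP[key] ?? key;    // fall back to the key as given
    out[mapped] = typeof value === 'boolean' ? (value ? '1' : '0') : String(value);
  }
  return out;
}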

sfatimar pushed a commit to intel/onnxruntime that referenced this pull request Feb 5, 2025
sfatimar pushed a commit to intel/onnxruntime that referenced this pull request Feb 5, 2025
@fs-eire (Contributor) commented Feb 9, 2025

I checked the implementation and found that it has a few errors.

Since the errors are not limited to QNN (there are also places to fix for CoreML and WebGPU), I made this change. Please refer to 4ce51f3 as a patch to this PR.

ashrit-ms pushed a commit that referenced this pull request Feb 11, 2025
@jywu-msft (Member)

I checked the implementation and found that it has a few errors.

Since the errors are not limited to QNN (there are also places to fix for CoreML and WebGPU), I made this change. Please refer to 4ce51f3 as a patch to this PR.

@joncamp would you be able to apply @fs-eire's patch to this PR? Let's try to merge it into main ASAP (for the upcoming ORT 1.21 release; code freeze is end of week).

Labels
ep:QNN issues related to QNN execution provider

4 participants