Keras TimeDistributed Layer seems not supported #340

Open
nassimus26 opened this issue Oct 10, 2024 · 2 comments

nassimus26 commented Oct 10, 2024

Hi, I am using a TimeDistributed layer in a model which has an input shape of {10, 150, 180, 3} (10 frames of (w: 150, h: 180, channels: 3)).

Since my device supports a maximum of 4 dimensions, I was hoping that it would work, but after converting the model from Keras to ONNX and then trying to convert it to RKNN, I am getting this error:

E build:   File "/home/mac/miniconda3/envs/ai/lib/python3.11/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 462, in _create_inference_session
E build:     sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
E build:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E build: onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Node (sequential_21_1/time_distributed_20_1/transpose) Op (Transpose) [TypeInferenceError] Invalid attribute perm {1, 0, 2, 3, 4}, input shape = {10, 150, 180, 3}
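
For context, the conversion step that triggers this error looks roughly like the following; a minimal sketch assuming the standard rknn-toolkit2 flow (the file paths and target platform are placeholders, not taken from the original setup):

    # Sketch of the ONNX -> RKNN conversion; paths and platform are placeholders.
    from rknn.api import RKNN

    rknn = RKNN(verbose=True)
    rknn.config(target_platform='rk3588')      # placeholder platform; mean/std left at defaults

    ret = rknn.load_onnx(model='model.onnx')   # ONNX model exported from Keras
    assert ret == 0, 'load_onnx failed'

    ret = rknn.build(do_quantization=True, dataset='dataset.txt')
    assert ret == 0, 'build failed'            # fails with the Transpose/perm error above

    rknn.export_rknn('model.rknn')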

Checking the ONNX model indeed shows a first Transpose layer with the perm attribute {1, 0, 2, 3, 4}. It seems to come from this code in the Keras TimeDistributed layer:

        def time_distributed_transpose(data):
            """Swaps the timestep and batch dimensions of a tensor."""
            axes = [1, 0, *range(2, len(data.shape))]
            return ops.transpose(data, axes=axes)
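
For illustration, a small NumPy sketch (batch size and shapes are placeholders) of how that axes list produces the 5-entry perm, and why it conflicts with the 4-D input reported in the error above:

    # Hypothetical shapes; TimeDistributed sees (batch, time, H, W, C).
    import numpy as np

    data = np.zeros((1, 10, 150, 180, 3))       # batch=1 is a placeholder
    axes = [1, 0, *range(2, len(data.shape))]
    print(axes)                                 # [1, 0, 2, 3, 4]
    print(np.transpose(data, axes).shape)       # (10, 1, 150, 180, 3)

    # The exported ONNX graph, however, feeds a 4-D tensor {10, 150, 180, 3}
    # into that Transpose node, so the 5-entry perm is rejected by ONNX Runtime.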

So is there any hope that I could run a TimeDistributed layer (which, by the way, takes arrays of frames as a dataset, not single images)?

Is it supported by RKNN or not? (The Keras TimeDistributed layer is converted using a Transpose layer in the ONNX graph.)

Is there a workaround ?

nassimus26 (Author) commented

Changing the opset to 18 while exporting to ONNX fixed that problem (see the export sketch after the log below), but now another issue is reported:

--> Loading model
W __init__: rknn-toolkit2 version: 1.6.0+81f21f4d
W load_onnx: If you don't need to crop the model, don't set 'inputs'/'input_size_list'/'outputs'!
W load_onnx: It is recommended onnx opset 19, but your onnx model opset is 18!
Loading : 100%|████████████████████████████████████████████████| 211/211 [00:00<00:00, 55730.36it/s]
W load_onnx: The config.mean_values is None, zeros will be set for input 0!
W load_onnx: The config.std_values is None, ones will be set for input 0!
done
--> Building model
W build: found outlier value, this may affect quantization accuracy
const name                                                                                                        abs_mean    abs_std     outlier value
sequential_23/time_distributed_16/sequential_22/efficientnetv2-b0/block4a_dwconv2/depthwise_weights_fused_bn      3.54        5.94        41.139      
sequential_23/time_distributed_16/sequential_22/efficientnetv2-b0/block4b_dwconv2/depthwise_weights_fused_bn      1.24        2.33        24.630      
sequential_23/time_distributed_16/sequential_22/efficientnetv2-b0/block4c_dwconv2/depthwise_weights_fused_bn      1.02        1.88        21.949      
sequential_23/time_distributed_16/sequential_22/efficientnetv2-b0/block5a_dwconv2/depthwise_weights_fused_bn      1.31        2.94        -31.833     
sequential_23/time_distributed_16/sequential_22/efficientnetv2-b0/block5b_dwconv2/depthwise_weights_fused_bn      1.28        1.91        -20.992     
sequential_23/time_distributed_16/sequential_22/efficientnetv2-b0/block5c_dwconv2/depthwise_weights_fused_bn      1.10        1.58        16.670      
sequential_23/time_distributed_16/sequential_22/efficientnetv2-b0/block5d_dwconv2/depthwise_weights_fused_bn      1.03        1.50        17.634      
sequential_23/time_distributed_16/sequential_22/efficientnetv2-b0/block5e_dwconv2/depthwise_weights_fused_bn      1.03        1.41        -20.423     
sequential_23/time_distributed_16/sequential_22/efficientnetv2-b0/block6a_dwconv2/depthwise_weights_fused_bn      1.38        1.12        -18.975     
sequential_23/time_distributed_16/sequential_22/efficientnetv2-b0/block6b_dwconv2/depthwise_weights_fused_bn      1.49        1.76        -29.987     
sequential_23/time_distributed_16/sequential_22/efficientnetv2-b0/block6c_dwconv2/depthwise_weights_fused_bn      1.55        1.77        -23.349     
sequential_23/time_distributed_16/sequential_22/efficientnetv2-b0/block6e_dwconv2/depthwise_weights_fused_bn      1.48        1.32        -17.528, 16.979
sequential_23/time_distributed_16/sequential_22/efficientnetv2-b0/block6f_dwconv2/depthwise_weights_fused_bn      1.49        1.23        -24.013     
sequential_23/time_distributed_16/sequential_22/efficientnetv2-b0/block6g_dwconv2/depthwise_weights_fused_bn      1.54        1.11        -15.516     
GraphPreparing : 100%|███████████████████████████████████████████| 257/257 [00:00<00:00, 849.90it/s]
Quantizating :   0%|                                                        | 0/257 [00:00<?, ?it/s]E build: The channel of r_shape must be 3!
W build: ===================== WARN(6) =====================
E rknn-toolkit2 version: 1.6.0+81f21f4d
Quantizating :   0%|                                                        | 0/257 [00:00<?, ?it/s]
E build: Catch exception when building RKNN model!
E build: Traceback (most recent call last):
E build:   File "rknn/api/rknn_base.py", line 1996, in rknn.api.rknn_base.RKNNBase.build
E build:   File "rknn/api/rknn_base.py", line 148, in rknn.api.rknn_base.RKNNBase._quantize
E build:   File "rknn/api/quantizer.py", line 1289, in rknn.api.quantizer.Quantizer.run
E build:   File "rknn/api/quantizer.py", line 821, in rknn.api.quantizer.Quantizer._get_layer_range
E build:   File "rknn/api/rknn_utils.py", line 176, in rknn.api.rknn_utils.get_input_img
E build:   File "rknn/api/rknn_log.py", line 92, in rknn.api.rknn_log.RKNNLog.e
E build: ValueError: The channel of r_shape must be 3!
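
For reference, the opset-18 export mentioned above can be done roughly like this; a minimal sketch assuming tf2onnx with the TensorFlow backend (the actual export code is not shown in this issue, and the model path, input name, and fixed batch size are placeholders):

    # Sketch: export the Keras model to ONNX with an explicit opset (tf2onnx assumed).
    import tensorflow as tf
    import tf2onnx

    model = tf.keras.models.load_model('video_model.keras')   # placeholder path

    spec = (tf.TensorSpec((1, 10, 150, 180, 3), tf.float32, name='input'),)
    tf2onnx.convert.from_keras(model, input_signature=spec, opset=18,
                               output_path='model.onnx')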

nassimus26 (Author) commented

Actually, it's a duplicate of the issue.
