
Unsupported operation _FusedBatchNormV3 Failed to parse UFF when converting frozen to plan on Jetson Nano #43

Open · flycat0101 opened this issue Aug 13, 2019 · 5 comments


@flycat0101

ENV:
Jetson Nano board with JetPack4.2.1
cuda 10.0
cuDNN 7.5
TensorFlow with GPU 1.13.1

Following the README:

  1. Clone the git repository, check out the trt_4plus branch, then build uff_to_plan.
  2. Run "source scripts/download_models.sh" to download the models, for example inception_v1.
  3. Run "python scripts/models_to_frozen_graphs.py" to convert the models to frozen graphs, for example inception_v1.pb.
  4. Run "python3 scripts/convert_plan.py data/frozen_graphs/inception_v1.pb data/plans/inception_v1.plan input 224 224 InceptionV1/Logits/SpatialSqueeze 1 0 float".
    This failed with the warning and error messages below. How can I fix it?

nano@nano-2:~/work/tf_to_trt_image_classification$ python3 scripts/convert_plan.py data/frozen_graphs/inception_v1.pb data/plans/inception_v1.plan input 224 224 InceptionV1/Logits/SpatialSqueeze 1 0 float
......

Using output node InceptionV1/Logits/SpatialSqueeze
Converting to UFF graph
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting InceptionV1/InceptionV1/Mixed_5c/Branch_3/Conv2d_0b_1x1/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting InceptionV1/InceptionV1/Mixed_5b/Branch_3/Conv2d_0b_1x1/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting InceptionV1/InceptionV1/Mixed_4f/Branch_3/Conv2d_0b_1x1/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting InceptionV1/InceptionV1/Mixed_4e/Branch_3/Conv2d_0b_1x1/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting InceptionV1/InceptionV1/Mixed_4d/Branch_3/Conv2d_0b_1x1/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
...
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting InceptionV1/InceptionV1/Mixed_5c/Branch_0/Conv2d_0a_1x1/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
No. nodes: 486
UFF Output written to data/tmp.uff
UffParser: Validator error: InceptionV1/InceptionV1/Mixed_5c/Branch_0/Conv2d_0a_1x1/BatchNorm/FusedBatchNormV3: Unsupported operation _FusedBatchNormV3
Failed to parse UFF
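One workaround that has been reported for this class of error is to rewrite `FusedBatchNormV3` nodes as plain `FusedBatchNorm` (which the UFF converter does register) in the frozen graph before converting. The sketch below only illustrates the idea: it operates on a stand-in list of node objects rather than a real `tf.GraphDef`, so the `Node` class and node names are placeholders. With a real frozen graph you would parse the protobuf into a `tf.GraphDef` and iterate `graph_def.node` the same way.

```python
from dataclasses import dataclass

@dataclass
class Node:
    # Stand-in for a tf.NodeDef; a real GraphDef node has the same "op" field.
    name: str
    op: str

def downgrade_fused_batch_norm(nodes):
    """Rewrite FusedBatchNormV3 ops to FusedBatchNorm in place.

    The UFF converter knows FusedBatchNorm but not the V3 variant; for an
    inference graph the extra V3 outputs/attributes are not needed, so
    renaming the op is often enough for the parser to accept the graph.
    Returns the number of nodes rewritten.
    """
    changed = 0
    for node in nodes:
        if node.op == "FusedBatchNormV3":
            node.op = "FusedBatchNorm"
            changed += 1
    return changed

# Example: the two batch-norm nodes are rewritten, the conv node is untouched.
graph = [
    Node("Mixed_5c/BatchNorm/FusedBatchNormV3", "FusedBatchNormV3"),
    Node("Mixed_5b/BatchNorm/FusedBatchNormV3", "FusedBatchNormV3"),
    Node("Conv2d_0a_1x1/Conv2D", "Conv2D"),
]
print(downgrade_fused_batch_norm(graph))  # → 2
```

Whether the renamed graph then parses cleanly depends on the TF and UFF versions in play; treat this as a sketch, not a guaranteed fix.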

@flycat0101 (Author)

import tensorrt as trt
trt.__version__
'5.1.6.1'

@while0l1

I got the same problem

@lovejing0306

I got the same problem

@xiaodi68 commented Jul 6, 2020

Hello,
I got a similar issue when converting a .pb to a TensorRT plan. Have you found a solution?
Thanks

Warning: No conversion function registered for layer: Fill yet.
Converting batch_normalization_88/ones_like as custom op: Fill
Warning: No conversion function registered for layer: Fill yet.
Converting batch_normalization_86/ones_like as custom op: Fill
DEBUG [/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py:96] Marking ['sequential_1/dense_2/Sigmoid'] as outputs
No. nodes: 981
UFF Output written to data/tmp.uff
UffParser: Validator error: batch_normalization_94/ones_like: Unsupported operation _Fill
Failed to parse UFF

@lovejing0306

You can try converting the model via TF → ONNX → TensorRT instead of going through UFF.
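The TF → ONNX → TensorRT route suggested above can be sketched with tf2onnx and trtexec. The file paths and node names below are just the ones used earlier in this thread, and the exact flags can differ by tool version, so treat this as a starting point rather than an exact recipe:

```shell
# Install the TensorFlow-to-ONNX converter
pip3 install tf2onnx

# Frozen GraphDef -> ONNX; input/output node names match the convert_plan.py
# invocation at the top of this issue
python3 -m tf2onnx.convert \
    --graphdef data/frozen_graphs/inception_v1.pb \
    --inputs input:0 \
    --outputs InceptionV1/Logits/SpatialSqueeze:0 \
    --output data/inception_v1.onnx

# ONNX -> TensorRT engine with trtexec (ships with TensorRT/JetPack);
# note that the output flag has changed across TensorRT releases, so
# check `trtexec --help` on your version
trtexec --onnx=data/inception_v1.onnx --saveEngine=data/plans/inception_v1.plan
```

This sidesteps the UFF parser entirely, which is why it avoids the `_FusedBatchNormV3` and `_Fill` validator errors shown above.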
