
Export from .pt to .onnx for YOLOv8 fails #28

@micahdbak

Description

I'm following the steps for the YOLOv8 pose model for use with DeepStream 7.1 on the Jetson AGX Orin. However, when I attempt to export the latest yolov8s-pose.pt model for use with the DeepStream app, I get the following exception(s):

python3 export_yoloV8_pose.py -w yolov8s-pose.pt --dynamic

Starting: yolov8s-pose.pt
Opening YOLOv8-Pose model
YOLOv8s-pose summary (fused): 81 layers, 11,615,724 parameters, 0 gradients, 30.2 GFLOPs
Creating labels.txt file
Exporting the model to ONNX
W1123 17:49:44.350000 14640 venv/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_compat.py:114] Setting ONNX exporter to use operator set version 18 because the requested opset_version 17 is a lower version than we have implementations for. Automatic version conversion will be performed, which may not be successful at converting to the requested version. If version conversion is unsuccessful, the opset version of the exported model will be kept at 18. Please consider setting opset_version >=18 to leverage latest ONNX features
Traceback (most recent call last):
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/venv/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_core.py", line 1416, in export
    decomposed_program = _prepare_exported_program_for_export(
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/venv/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_core.py", line 980, in _prepare_exported_program_for_export
    exported_program = _fx_passes.decompose_with_registry(exported_program, registry)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/venv/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_fx_passes.py", line 19, in decompose_with_registry
    return exported_program.run_decompositions(decomp_table)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/venv/lib/python3.12/site-packages/torch/export/exported_program.py", line 124, in wrapper
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/venv/lib/python3.12/site-packages/torch/export/exported_program.py", line 1484, in run_decompositions
    return _decompose_exported_program(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/venv/lib/python3.12/site-packages/torch/export/exported_program.py", line 967, in _decompose_exported_program
    ) = _decompose_and_get_gm_with_new_signature_constants(
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/venv/lib/python3.12/site-packages/torch/export/exported_program.py", line 476, in _decompose_and_get_gm_with_new_signature_constants
    aten_export_artifact = _export_to_aten_ir(
                           ^^^^^^^^^^^^^^^^^^^
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/venv/lib/python3.12/site-packages/torch/export/_trace.py", line 877, in _export_to_aten_ir
    gm, graph_signature = transform(aot_export_module)(
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/venv/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1444, in aot_export_module
    fx_g, metadata, in_spec, out_spec = _aot_export_function(
                                        ^^^^^^^^^^^^^^^^^^^^^
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/venv/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1694, in _aot_export_function
    aot_state = create_aot_state(
                ^^^^^^^^^^^^^^^^^
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/venv/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 567, in create_aot_state
    fw_metadata = run_functionalized_fw_and_collect_metadata(
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/collect_metadata_analysis.py", line 207, in inner
    flat_f_outs = f(*flat_f_args)
                  ^^^^^^^^^^^^^^^
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/utils.py", line 187, in flat_fn
    tree_out = fn(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/graph_capture_wrappers.py", line 1350, in functional_call
    out = PropagateUnbackedSymInts(mod).run(
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/venv/lib/python3.12/site-packages/torch/fx/interpreter.py", line 174, in run
    self.env[node] = self.run_node(node)
                     ^^^^^^^^^^^^^^^^^^^
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/venv/lib/python3.12/site-packages/torch/fx/experimental/symbolic_shapes.py", line 7867, in run_node
    rebind_unbacked(fake_mode.shape_env, n, result)
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/venv/lib/python3.12/site-packages/torch/fx/experimental/symbolic_shapes.py", line 602, in rebind_unbacked
    if u1.node.hint is not None:
       ^^^^^^^
AttributeError: 'float' object has no attribute 'node'

While executing %item : [num_users=1] = call_function[target=torch.ops.aten.item.default](args = (%getitem_16,), kwargs = {})
Original traceback:
File "/home/robot/DeepStream-Yolo-Pose/ultralytics/venv/lib/python3.12/site-packages/torch/nn/modules/container.py", line 250, in forward
    input = module(input)
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/ultralytics/nn/tasks.py", line 137, in forward
    return self.predict(x, *args, **kwargs)
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/ultralytics/nn/modules/head.py", line 359, in forward
    x = Detect.forward(self, x)
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/ultralytics/nn/modules/head.py", line 123, in forward
    y = self._inference(x)
Use tlparse to see full graph. (https://github.com/pytorch/tlparse?tab=readme-ov-file#tlparse-parse-structured-pt2-logs)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/export_yoloV8_pose.py", line 139, in <module>
    main(args)
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/export_yoloV8_pose.py", line 98, in main
    torch.onnx.export(
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/venv/lib/python3.12/site-packages/torch/onnx/__init__.py", line 296, in export
    return _compat.export_compat(
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/venv/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_compat.py", line 143, in export_compat
    onnx_program = _core.export(
                   ^^^^^^^^^^^^^
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/venv/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_flags.py", line 23, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/venv/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_core.py", line 1444, in export
    raise _errors.ConversionError(
torch.onnx._internal.exporter._errors.ConversionError: Failed to decompose the FX graph for ONNX compatibility. This is step 2/3 of exporting the model to ONNX. Next steps:
- Create an issue in the PyTorch GitHub repository against the *torch.export* component and attach the full error stack as well as reproduction scripts.
- Create an error report with `torch.onnx.export(..., report=True)`, and save the ExportedProgram as a pt2 file. Create an issue in the PyTorch GitHub repository against the *onnx* component. Attach the error report and the pt2 model.

## Exception summary

<class 'AttributeError'>: 'float' object has no attribute 'node'

While executing %item : [num_users=1] = call_function[target=torch.ops.aten.item.default](args = (%getitem_16,), kwargs = {})
Original traceback:
File "/home/robot/DeepStream-Yolo-Pose/ultralytics/venv/lib/python3.12/site-packages/torch/nn/modules/container.py", line 250, in forward
    input = module(input)
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/ultralytics/nn/tasks.py", line 137, in forward
    return self.predict(x, *args, **kwargs)
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/ultralytics/nn/modules/head.py", line 359, in forward
    x = Detect.forward(self, x)
  File "/home/robot/DeepStream-Yolo-Pose/ultralytics/ultralytics/nn/modules/head.py", line 123, in forward
    y = self._inference(x)
Use tlparse to see full graph. (https://github.com/pytorch/tlparse?tab=readme-ov-file#tlparse-parse-structured-pt2-logs)

(Refer to the full stack trace above for more information.)

I am uncertain whether this is a problem with my setup or whether the export utility has become outdated.
