Update the CM MLPerf inference docs for CUDA device running on host #21

Open
@anandhu-eng

Description

@anandhu-eng

Migrated from mlcommons/mlperf-automations_archived#14
Originally created by @arjunsuresh on Fri, 27 Sep 2024 10:49:33 GMT


We need to update the MLPerf inference docs for native CUDA runs:

  1. Add a remark that unless CUDA, cuDNN and TensorRT are already available in the host environment, the docker option is recommended.
  2. In the run options, document the flags used to pass in the cuDNN and TensorRT tar files (see the sketch below).
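A minimal sketch of what the requested docs addition could look like. The `cm run script` command with `--device=cuda` and `--docker` follows the existing CM MLPerf inference docs; the `--tar_file` option used here for the `get,cudnn` and `get,tensorrt` scripts is an assumption and should be verified against the actual CM scripts before the docs are updated.

```bash
# Recommended path: use the docker option unless CUDA, cuDNN and TensorRT
# are already installed on the host.
cm run script --tags=run-mlperf,inference \
   --model=resnet50 --implementation=nvidia --framework=tensorrt \
   --device=cuda --scenario=Offline --execution_mode=test \
   --docker --quiet

# Native run on the host: first register the locally downloaded cuDNN and
# TensorRT tar files with CM (flag name assumed, verify in the CM scripts).
cm run script --tags=get,cudnn --tar_file=/path/to/cudnn-linux-x86_64.tar.xz
cm run script --tags=get,tensorrt --tar_file=/path/to/TensorRT.tar.gz

# Then run the same benchmark command without --docker.
cm run script --tags=run-mlperf,inference \
   --model=resnet50 --implementation=nvidia --framework=tensorrt \
   --device=cuda --scenario=Offline --execution_mode=test --quiet
```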

Metadata

Assignees

No one assigned

Labels

documentation (Improvements or additions to documentation), enhancement (New feature or request)
