Migrated from mlcommons/mlperf-automations_archived#14
Originally created by @arjunsuresh on Fri, 27 Sep 2024 10:49:33 GMT
We need to update the MLPerf inference docs for native CUDA runs
- Add a remark that unless CUDA, cuDNN, and TensorRT are already available in the environment, the Docker option is recommended.
- In the run options, document the flags used to pass in the local cuDNN and TensorRT installation files (see the sketch after this list).
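A minimal sketch of what the documented native-run command could look like, assuming the tarball paths are passed through environment flags such as `--env.CM_CUDNN_TAR_FILE_PATH` and `--env.CM_TENSORRT_TAR_FILE_PATH`; the exact script tags and flag names are assumptions here and should be verified against the current automation before going into the docs:

```bash
# Sketch only: a native (non-Docker) run where the cuDNN and TensorRT
# tarballs downloaded from NVIDIA are passed in explicitly.
# Flag and tag names assumed, not confirmed by this issue.
cm run script --tags=run-mlperf,inference \
    --implementation=nvidia \
    --device=cuda \
    --env.CM_CUDNN_TAR_FILE_PATH=<path to cuDNN tar file> \
    --env.CM_TENSORRT_TAR_FILE_PATH=<path to TensorRT tar file> \
    --quiet
```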