From ac3b50f16977593e6b01de21093e3f3a377abd2b Mon Sep 17 00:00:00 2001
From: ywang103-amd
Date: Thu, 13 Feb 2025 16:47:25 -0500
Subject: [PATCH] Spatial multiplexing: part 2, analysis (merge to develop) (#542)

* External CI: rename pipeline to rocprofiler-compute (#463)
Signed-off-by: Daniel Su

* Update webui branding (#459)

* Update name and icon for browser tab to rocprofiler-compute.
Signed-off-by: xuchen-amd

* Update name and icon for browser tab to rocprofiler-compute.
Signed-off-by: xuchen-amd

---------

Signed-off-by: xuchen-amd

* Update branding in documentation (#442)

* find/replace Omniperf to ROCm Compute Profiler
Signed-off-by: Peter Park

* update name in Sphinx conf
Signed-off-by: Peter Park

* mv what-is-omniperf.rst -> what-is-rocprof-compute.rst
Signed-off-by: Peter Park

* update Tutorials section
Signed-off-by: Peter Park

* add Omniperf as keyword to Conceptual section for internal search
Signed-off-by: Peter Park

* update Reference section
Signed-off-by: Peter Park

* black fmt conf.py
Signed-off-by: Peter Park

* update profile mode and basic usage subsections
Signed-off-by: Peter Park

* update how to use analyze mode subsection
Signed-off-by: Peter Park

* update install section
Signed-off-by: Peter Park

* fix sphinx warnings
Signed-off-by: Peter Park

* fix cmd line examples in profile/mode.rst
Signed-off-by: Peter Park

* update install decision tree image
Signed-off-by: Peter Park

* fix TOC and index; fix weird wording
Signed-off-by: Peter Park

* fix cli text: deriving rocprofiler-compute metrics...
Signed-off-by: Peter Park

* update standalone-gui.rst
Signed-off-by: Peter Park

* restore removed doc updates from #428
Signed-off-by: Peter Park

* update ref to Omniperf in index.rst
Signed-off-by: Peter Park

* fix grafana connection name to match image
Signed-off-by: Peter Park

* update cmds in tutorials
Signed-off-by: Peter Park

---------

Signed-off-by: Peter Park

* MI300 roofline enablement in rocprofiler-compute (#470)

* MI300 roofline enablement in rocprofiler-compute

requirements.txt: running some modules complained that the numpy version was
too new; added an extra requirement that numpy be 1.x.

pmc_roof_perf.txt: added the TCC_BUBBLE_sum counter to the profile.

soc_gfx940.py, soc_gfx941.py, soc_gfx942.py: removed the console logs saying
that roofline is temporarily disabled; uncommented the blocks that check for
the roofline csv and run roofline post-processing.

roofline_calc.py: added MI300 to the supported SoCs; added a new hbm_data
calculation for MI300 using TCC_BUBBLE_sum (used only when the counter > 0);
updated a few comments.

roofline-ubuntu-20_04-mi300-rocm6: binary for Ubuntu systems to enable MI300
roofline calculations, from rocm-amdgpu-bench.

Note: other distros will also get roofline binaries to enable MI300, but they
need further testing before going into the branch.
Signed-off-by: Carrie Fallows

* Reformatting roofline_calc.py
Signed-off-by: Carrie Fallows

---------

Signed-off-by: Carrie Fallows

* Update Python format checker (#471)

* Add pre-commit hook for Python formatting
Signed-off-by: coleramos425

* Update formatting workflow to run on latest Python and add isort formatter
Signed-off-by: coleramos425

* Fix caught yaml formatting issues
* Update pyproject file
* Add pre-commit hook instruction to CONTRIBUTING guide
* Remove target-version from black pyproject.toml

* Fixed formatting errors found with black and isort
Signed-off-by: David Galiffi

* Run hook: whitespace, fix end-of-file spaces

---------

Signed-off-by: coleramos425
Signed-off-by: David Galiffi
Co-authored-by: David Galiffi

* Bump cryptography from 43.0.0 to 43.0.1 in /docs/sphinx (#473)

Bumps [cryptography](https://github.com/pyca/cryptography) from 43.0.0 to 43.0.1.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/43.0.0...43.0.1)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: indirect
...

Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Fix file permission on MI300 roofline binary (#477)
Signed-off-by: David Galiffi

* Removing numpy requirement of <2 (#478)

Checks were failing when the version was too high, and there is no need to
pin to the lower version.
Signed-off-by: Carrie Fallows

* Fix crash when loading web UI roofline for gfx942 (#479)

* Fix crash when loading web UI roofline for gfx942
* Fix formatting
Signed-off-by: benrichard-amd

* Make same changes for gfx940, gfx942.
Signed-off-by: benrichard-amd

* Fix formatting in soc_gfx940 and soc_gfx941.
Signed-off-by: benrichard-amd

---------

Signed-off-by: benrichard-amd

* Rebranding name change patch (#469)

* Patch in missed name change for rebranding.
Signed-off-by: xuchen-amd

* Patch in missed name change for rebranding.
Signed-off-by: xuchen-amd

---------

Signed-off-by: xuchen-amd

* Move dependabot.yml to .github/ and bump rocm-docs-core (#481)

* Move dependabot.yml to .github/
* Bump rocm-docs-core to 1.8.5
* Bump rocm-docs-core to 1.9.0

* Fix packaging for upgrading (#486)

Specify that "rocprofiler-compute" replaces / obsoletes the "omniperf" package.

* Renamed extension path from omniperf to rocprofiler_compute (#487)
Signed-off-by: Tim Gu

* MI300 rhel and sles roofline binaries (#480)

* Roofline bins for MI300 on rhel and sles distributions

Built from rocm-amdgpu-bench; tested on the respective distro systems with
MI300 hardware.
Signed-off-by: Carrie Fallows

* Minor modifications removing hardcoded variables in roofline files.
Signed-off-by: Carrie Fallows

---------

Signed-off-by: Carrie Fallows

* Modify test_profile_general.py ctest to include MI300 enablement (#498)
Signed-off-by: Carrie Fallows

* part 1 to support rocprofv3 (#492)

* rocprofv3 support initial commit

Can run rocprofv3, but it ultimately fails: rocprofv3 says the counter
capacity is exceeded, and the output CSV file format is different from v1/v2.

* Add rocprofv3 detection so v2 can still be used

It's hacky but it'll do for now.

* Add code path to convert rocprofv3 JSON output into CSV

* Grab correct value for Queue ID

* Use _sum suffix to sum TCC counters

Previously we were specifying each channel for TCC counters. rocprofv3 does
not support specifying each TCC channel, and will instead auto-sum given the
bare TCC counter name. The counter name with the _sum suffix is also
supported, in v3 as well as in v1 and v2. So we will use the TCC counter name
with the _sum suffix.
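An illustrative sketch of the summation involved (toy column names and values,
not code from this patch): summing the per-channel TCC columns yields the same
value as requesting the _sum counter directly.

    import pandas as pd

    # v1/v2 style: one column per TCC channel, summed in post-processing
    df = pd.DataFrame({"TCC_HIT[0]": [10], "TCC_HIT[1]": [12]})
    channel_cols = [c for c in df.columns if c.startswith("TCC_HIT[")]
    # v3 style: a single auto-summed value, equivalent to requesting TCC_HIT_sum
    df["TCC_HIT_sum"] = df[channel_cols].sum(axis=1)  # -> 22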
* Fix incorrect counter outputs when using rocprofv3

In the JSON output some counters appear multiple times and must be summed to
get the correct value. These summed values match the rocprofv3 output in CSV
mode and also match the rocprofv2 output.

* Remove duplicate Correlation_ID and Wave_Size in output

* Handle JSON output that does not contain any dispatches

Omniperf was assuming each JSON output from rocprofv3 would always contain
dispatches. This is not the case: in a multi-process workload, for example,
one of the processes may not dispatch any kernels. A JSON file is still
written for that process, but it contains no dispatches.

* Code cleanup

* Update search path for rocprofv3 results

rocprofv3 was updated to include the hostname in the path where it outputs
results.

* Handle accumulate counters

In v1/v2, rocprof uses the SQ_ACCUM_PREV_HIRES counter for the accumulate
counters. v3 does not have this, so we need to define our own counters in
counter_defs.yaml. For this we use the counter name + _ACCUM, for example
SQ_INSTR_LEVEL_SMEM_ACCUM. To use rocprofv3 you will need to update
counter_defs.yaml to include these new counter definitions.
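A minimal sketch of that naming convention (the helper below is hypothetical,
for illustration only; the real definitions live in counter_defs.yaml):

    def v3_accum_counter_name(base_counter: str) -> str:
        # v1/v2 pair the base counter with SQ_ACCUM_PREV_HIRES; v3 instead
        # uses a custom "<counter>_ACCUM" definition in counter_defs.yaml
        return base_counter + "_ACCUM"

    assert v3_accum_counter_name("SQ_INSTR_LEVEL_SMEM") == "SQ_INSTR_LEVEL_SMEM_ACCUM"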
* Use correct GPU ID

When converting JSON -> CSV we were assigning node_id to GPU_ID. Since the
JSON contains non-GPU devices, the node_id for GPUs might not start at 0 as
expected. This commit maps the agent ID to the appropriate GPU ID.

* Parse scratch memory per work item from JSON

* Support rocprofv3 CSV parsing

JSON decoding is very slow for large files. Include support for parsing
rocprofv3 CSV output and make that the default. CSV/JSON can be toggled via
the ROCPROF_OUTPUT_FORMAT environment variable, e.g. ROCPROF_OUTPUT_FORMAT=csv
or ROCPROF_OUTPUT_FORMAT=json.
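A minimal sketch of that toggle (parse_csv and parse_json are hypothetical
stand-ins for the real parsers, which this message does not name):

    import os

    def parse_csv(path):
        return "csv results for " + path   # fast path, the default

    def parse_json(path):
        return "json results for " + path  # JSON decoding is slow for large files

    output_format = os.environ.get("ROCPROF_OUTPUT_FORMAT", "csv").lower()
    parser = parse_json if output_format == "json" else parse_csv
    results = parser("results")            # illustrative argument only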
* black format after merge
* format isort
* change return of rocprof_cmd to try to resolve test's error
* hack to pick the last part of rocminfo's name
* debug log of hacks

* Modify test_profile_general.py ctest to include MI300 enablement.

Currently failing because of the explicitly excluded roofline files for the
SoC and the auto-failed asserts for roof-only tests, originally in place
because roofline was not yet enabled on MI300.
Signed-off-by: Carrie Fallows

* black and isort formatted
* corrected line of copyright

---------

Signed-off-by: Carrie Fallows
Co-authored-by: benrichard-amd
Co-authored-by: YANG WANG
Co-authored-by: Carrie Fallows

* fix for crash of timestamp of part 1 for rocprofv3 (#499)

* fix the error caused by ignoring the missing counter csv file from
rocprofv3 for timestamp
* isort and black formatted

* quick fix for gfx906 roofline (#505)

* Multi node support (#503)

* [CTest] Pipeline failures for MI300 (#483)

* Propagate new chip_id logic to testing workflow
Signed-off-by: coleramos425

* Add a debug line to tests
Signed-off-by: coleramos425

* Trying to set rocprofv2 generally in CTest module
Signed-off-by: coleramos425

* Remove temp debugging lines from CI
Signed-off-by: coleramos425

* Add roofline entry for MI300 expected files in CI tests
Signed-off-by: coleramos425

* Make num_devices modifier global in scope
Signed-off-by: coleramos425

* Change kernel name in PyTest to confirm rocprofv2 bug

Related to https://ontrack-internal.amd.com/browse/SWDEV-503453
Signed-off-by: coleramos425

---------

Signed-off-by: coleramos425

* Spatial-multiplexing: part 1 profiling stage (#465)

* rocprofv3 support initial commit

Can run rocprofv3, but it ultimately fails: rocprofv3 says the counter
capacity is exceeded, and the output CSV file format is different from v1/v2.

* Add rocprofv3 detection so v2 can still be used

It's hacky but it'll do for now.

* Add code path to convert rocprofv3 JSON output into CSV

* Grab correct value for Queue ID

* Use _sum suffix to sum TCC counters

Previously we were specifying each channel for TCC counters. rocprofv3 does
not support specifying each TCC channel, and will instead auto-sum given the
bare TCC counter name. The counter name with the _sum suffix is also
supported, in v3 as well as in v1 and v2. So we will use the TCC counter name
with the _sum suffix.

* Fix incorrect counter outputs when using rocprofv3

In the JSON output some counters appear multiple times and must be summed to
get the correct value. These summed values match the rocprofv3 output in CSV
mode and also match the rocprofv2 output.

* Remove duplicate Correlation_ID and Wave_Size in output

* Handle JSON output that does not contain any dispatches

Omniperf was assuming each JSON output from rocprofv3 would always contain
dispatches. This is not the case: in a multi-process workload, for example,
one of the processes may not dispatch any kernels. A JSON file is still
written for that process, but it contains no dispatches.

* Code cleanup

* Update search path for rocprofv3 results

rocprofv3 was updated to include the hostname in the path where it outputs
results.

* Handle accumulate counters

In v1/v2, rocprof uses the SQ_ACCUM_PREV_HIRES counter for the accumulate
counters. v3 does not have this, so we need to define our own counters in
counter_defs.yaml. For this we use the counter name + _ACCUM, for example
SQ_INSTR_LEVEL_SMEM_ACCUM. To use rocprofv3 you will need to update
counter_defs.yaml to include these new counter definitions.

* debug code
* add logic code for multiplexing
* minor fix
* more fixes
* count accu files as well

* Use correct GPU ID

When converting JSON -> CSV we were assigning node_id to GPU_ID. Since the
JSON contains non-GPU devices, the node_id for GPUs might not start at 0 as
expected. This commit maps the agent ID to the appropriate GPU ID.

* fix error with csv file parse from json and merge during post-processing
* implemented parsing of csv files from v3 output for optimization
* Parse scratch memory per work item from JSON

* Support rocprofv3 CSV parsing

JSON decoding is very slow for large files. Include support for parsing
rocprofv3 CSV output and make that the default. CSV/JSON can be toggled via
the ROCPROF_OUTPUT_FORMAT environment variable, e.g. ROCPROF_OUTPUT_FORMAT=csv
or ROCPROF_OUTPUT_FORMAT=json.

* black format after merge
* format isort
* change return of rocprof_cmd to try to resolve test's error
* hack to pick the last part of rocminfo's name
* debug log of hacks

* Modify test_profile_general.py ctest to include MI300 enablement.

Currently failing because of the explicitly excluded roofline files for the
SoC and the auto-failed asserts for roof-only tests, originally in place
because roofline was not yet enabled on MI300.
Signed-off-by: Carrie Fallows

* black and isort formatted
* formatted by isort and black
* change default rocprof output to csv
* repaired crash caused by missing csv counter file when running for timestamp
* change name to spatial-multiplexing from multiplexing
* make necessary modifications for review
* set the default value of the spatial_multiplexing argument to None
* repair the part that blocks regular pmc files' generation

---------

Signed-off-by: Carrie Fallows
Co-authored-by: benrichard-amd
Co-authored-by: fei.zheng
Co-authored-by: YANG WANG
Co-authored-by: Carrie Fallows

* Simple fix for gpu model value. (#508)
Signed-off-by: xuchen-amd

* Add FP64 to plot adhering to pdf name (#507)

* Replacing the FP32-only plot with an FP32 & FP64 combo plot. Results will
likely be negligible, but the plot name indicates both should be graphed.
Signed-off-by: Carrie Fallows

* Remove duplicate AI plot to clean up fp32/fp64 graph
Signed-off-by: Carrie Fallows

---------

Signed-off-by: Carrie Fallows

* Add gpu series for roofline (#510)

* Add gpu_series for roofline.
* Use gpu_series in path names for roofline.

* Fix TCC on MI200 when introducing rocprofv3 (#509)

* quick fix for v2
* one more fix
* revert a bit

---------

Co-authored-by: ywang103-amd

* initial implementation of analyze part of spatial-multiplexing
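The intended merge semantics on a toy example (the numbers are made up; this
mirrors the timestamp handling in merge_counters_spatial_multiplex() in the
diff below):

    import pandas as pd

    # the same kernel profiled on two devices
    group = pd.DataFrame({"Start_Timestamp": [100, 110], "End_Timestamp": [150, 170]})
    start = group["Start_Timestamp"].median()  # 105.0
    delta = (group["End_Timestamp"] - group["Start_Timestamp"]).median()  # 55.0
    end = start + delta  # 160.0: the median start plus the median duration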
* Bump rocm-docs-core from 1.9.0 to 1.12.0 in /docs/sphinx (#511)

Bumps [rocm-docs-core](https://github.com/ROCm/rocm-docs-core) from 1.9.0 to 1.12.0.
- [Release notes](https://github.com/ROCm/rocm-docs-core/releases)
- [Changelog](https://github.com/ROCm/rocm-docs-core/blob/develop/CHANGELOG.md)
- [Commits](https://github.com/ROCm/rocm-docs-core/compare/v1.9.0...v1.12.0)

---
updated-dependencies:
- dependency-name: rocm-docs-core
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* add exclusion for expired columns
* block sanity check and set correct dir for sysinfo.csv
* correct typo in function merge_counters_spatial_multiplex and format the code
* add call for csv merging for multiplex and modify condition check
* modify logic of create_df_pmc

* Update sample roofline plot img (#516)

* Modify path to use gpu_model instead of gpu_series to match other workload
directory path creation/search points. Affects manual testing, does not seem
to affect ctests. (#513)
Signed-off-by: Carrie Fallows

* Improve formatting when displaying rocprof command. (#476)

* Improve formatting when displaying rocprof command.
Signed-off-by: xuchen-amd

* Fix python formatting.
Signed-off-by: xuchen-amd

* Strip unwanted characters (rocprofv1 specific) from rocprof commands.
Signed-off-by: xuchen-amd

* Strip unwanted characters (rocprofv1 specific) from rocprof commands.
Signed-off-by: xuchen-amd

* Save the unmodified arguments for rocprof for debug message display.
Signed-off-by: xuchen-amd

---------

Signed-off-by: xuchen-amd

* resolve issue related to multi-indexing (need to resolve metric calculation
next step)

* quick fix for mpi_support (#518)

* Pass accumulate counters to rocprofv3 using -E option (#522)

rocprofv3 has a new -E option through which extra counters can be passed (see
accum_counters.yaml) instead of defining them in counter_defs.yaml.

* Unify all file handling with pathlib (#512)

* Replace occurrences of os.path functions with equivalent functions from the
pathlib library (a short sketch of this convention follows after the trailers
below)
* Remove unwanted imports of os.path and os
* Add coding guidelines for using pathlib instead of os.path

* Auto sync staging and mainline on a weekly cadence (#517)
Signed-off-by: coleramos425

* Docker environment for testing (#515)

* Add instructions on how to use Docker for manual and automatic testing

* Quick fix for mpi_support (#527)

* changed the logic for adding the roofline file to the expected csv list to
fix a crash on MI100 (#528)

* fixed the content issue in counter columns and made analyze mode work
* remove unnecessary log print
* Use PAT to bypass branch protection
* removed the requirement for the user to specify node info in the
spatial_multiplexing argument and added comments on the dataframe merge
functions
* add log for setup_workload_dir to check failure
* change the way spatial multiplexing mode is detected to try to resolve the
unit test failure
* remove unnecessary comment
* add todo index to indicate future removal of the format conversion

---------

Signed-off-by: Daniel Su
Signed-off-by: xuchen-amd
Signed-off-by: Peter Park
Signed-off-by: Carrie Fallows
Signed-off-by: coleramos425
Signed-off-by: David Galiffi
Signed-off-by: dependabot[bot]
Signed-off-by: Carrie Fallows
Signed-off-by: benrichard-amd
Signed-off-by: Tim Gu
Co-authored-by: Daniel Su
Co-authored-by: xuchen-amd
Co-authored-by: Peter Park
Co-authored-by: cfallows-amd
Co-authored-by: Cole Ramos
Co-authored-by: David Galiffi
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ben Richard <143630488+benrichard-amd@users.noreply.github.com>
Co-authored-by: Tim Gu
Co-authored-by: benrichard-amd
Co-authored-by: YANG WANG
Co-authored-by: Fei Zheng <44449748+feizheng10@users.noreply.github.com>
Co-authored-by: fei.zheng
Co-authored-by: vedithal-amd
---
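Note: #512 above standardized file handling on pathlib. A short illustrative
sketch of that convention (the paths are made up, not from this patch):

    from pathlib import Path

    workload_dir = Path("workloads") / "app" / "MI300"   # instead of os.path.join(...)
    sysinfo = workload_dir.joinpath("sysinfo.csv")
    if sysinfo.is_file():                                # instead of os.path.isfile(...)
        text = sysinfo.read_text()                       # instead of open(...).read()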
 src/argparser.py                              |   8 ++
 src/rocprof_compute_analyze/analysis_base.py  |  13 +-
 src/rocprof_compute_analyze/analysis_cli.py   |   6 +
 src/rocprof_compute_analyze/analysis_webui.py |  13 ++
 src/rocprof_compute_base.py                   |  10 ++
 src/utils/file_io.py                          |  47 ++++---
 src/utils/utils.py                            | 122 ++++++++++++++++++
 7 files changed, 202 insertions(+), 17 deletions(-)

diff --git a/src/argparser.py b/src/argparser.py
index 534233191..159b00ab6 100644
--- a/src/argparser.py
+++ b/src/argparser.py
@@ -495,6 +495,14 @@ def omniarg_parser(
         nargs="+",
         help="\t\tSpecify GPU id(s) for filtering.",
     )
+    analyze_group.add_argument(
+        "--spatial-multiplexing",
+        dest="spatial_multiplexing",
+        required=False,
+        default=False,
+        action="store_true",
+        help="\t\t\tMode of spatial multiplexing.",
+    )
     analyze_group.add_argument(
         "-o",
         "--output",
diff --git a/src/rocprof_compute_analyze/analysis_base.py b/src/rocprof_compute_analyze/analysis_base.py
index e63735efb..51e7778d7 100644
--- a/src/rocprof_compute_analyze/analysis_base.py
+++ b/src/rocprof_compute_analyze/analysis_base.py
@@ -36,6 +36,7 @@
     console_log,
     demarcate,
     is_workload_empty,
+    merge_counters_spatial_multiplex,
 )
 
 
@@ -57,6 +58,10 @@ def set_soc(self, omni_socs):
     def get_socs(self):
         return self.__socs
 
+    @demarcate
+    def spatial_multiplex_merge_counters(self, df):
+        return merge_counters_spatial_multiplex(df)
+
     @demarcate
     def generate_configs(self, arch, config_dir, list_stats, filter_metrics, sys_info):
         single_panel_config = file_io.is_single_panel_config(
@@ -142,6 +147,7 @@ def initalize_runs(self, normalization_filter=None):
             sysinfo_path = (
                 Path(d[0])
                 if self.__args.nodes is None
+                and self.__args.spatial_multiplexing is not True
                 else file_io.find_1st_sub_dir(d[0])
             )
             sys_info = file_io.load_sys_info(sysinfo_path.joinpath("sysinfo.csv"))
@@ -166,6 +172,7 @@ def initalize_runs(self, normalization_filter=None):
             sysinfo_path = (
                 Path(d[0])
                 if self.__args.nodes is None
+                and self.__args.spatial_multiplexing is not True
                 else file_io.find_1st_sub_dir(d[0])
             )
             w.sys_info = file_io.load_sys_info(sysinfo_path.joinpath("sysinfo.csv"))
@@ -199,7 +206,11 @@ def sanitize(self):
 
         # validate profiling data
         # Todo: more err check
-        if not (self.__args.nodes != None or self.__args.list_nodes):
+        if not (
+            self.__args.nodes != None
+            or self.__args.list_nodes
+            or self.__args.spatial_multiplexing
+        ):
             is_workload_empty(dir[0])
 
         # else:
diff --git a/src/rocprof_compute_analyze/analysis_cli.py b/src/rocprof_compute_analyze/analysis_cli.py
index 95d7f24a5..facfad01f 100644
--- a/src/rocprof_compute_analyze/analysis_cli.py
+++ b/src/rocprof_compute_analyze/analysis_cli.py
@@ -44,10 +44,16 @@ def pre_processing(self):
             self._runs[d[0]].raw_pmc = file_io.create_df_pmc(
                 d[0],
                 self.get_args().nodes,
+                self.get_args().spatial_multiplexing,
                 self.get_args().kernel_verbose,
                 self.get_args().verbose,
             )
 
+            if self.get_args().spatial_multiplexing:
+                self._runs[d[0]].raw_pmc = self.spatial_multiplex_merge_counters(
+                    self._runs[d[0]].raw_pmc
+                )
+
             file_io.create_df_kernel_top_stats(
                 df_in=self._runs[d[0]].raw_pmc,
                 raw_data_dir=d[0],
diff --git a/src/rocprof_compute_analyze/analysis_webui.py b/src/rocprof_compute_analyze/analysis_webui.py
index f8d6b3bfb..58336d02c 100644
--- a/src/rocprof_compute_analyze/analysis_webui.py
+++ b/src/rocprof_compute_analyze/analysis_webui.py
@@ -113,9 +113,16 @@ def generate_from_filter(
         base_data[base_run].raw_pmc = file_io.create_df_pmc(
             self.dest_dir,
             self.get_args().nodes,
+            self.get_args().spatial_multiplexing,
             self.get_args().kernel_verbose,
             self.get_args().verbose,
         )
+
+        if self.get_args().spatial_multiplexing:
+            base_data[base_run].raw_pmc = self.spatial_multiplex_merge_counters(
+                base_data[base_run].raw_pmc
+            )
+
         console_debug("analysis", "gui dispatch filter is %s" % disp_filt)
         console_debug("analysis", "gui kernel filter is %s" % kernel_filter)
         console_debug("analysis", "gui gpu filter is %s" % gcd_filter)
@@ -290,6 +297,12 @@ def pre_processing(self):
             self.get_args().kernel_verbose,
             args.verbose,
         )
+
+        if self.get_args().spatial_multiplexing:
+            self._runs[self.dest_dir].raw_pmc = self.spatial_multiplex_merge_counters(
+                self._runs[self.dest_dir].raw_pmc
+            )
+
         file_io.create_df_kernel_top_stats(
             df_in=self._runs[self.dest_dir].raw_pmc,
             raw_data_dir=self.dest_dir,
diff --git a/src/rocprof_compute_base.py b/src/rocprof_compute_base.py
index 65e7cf631..c40748233 100644
--- a/src/rocprof_compute_base.py
+++ b/src/rocprof_compute_base.py
@@ -219,6 +219,15 @@ def parse_args(self):
                     p.mkdir(parents=True, exist_ok=False)
                 except FileExistsError:
                     console_error("Directory already exists.")
+
+        elif self.__args.mode == "analyze":
+            # block all filters during spatial multiplexing
+            if self.__args.spatial_multiplexing:
+                self.__args.gpu_id = None
+                self.__args.gpu_kernel = None
+                self.__args.gpu_dispatch_id = None
+                self.__args.nodes = None
+
         return
 
     @demarcate
@@ -342,6 +351,7 @@ def run_analysis(self):
             sysinfo_path = (
                 Path(d[0])
                 if analyzer.get_args().nodes is None
+                and analyzer.get_args().spatial_multiplexing is not True
                 else file_io.find_1st_sub_dir(d[0])
             )
             sys_info = file_io.load_sys_info(sysinfo_path.joinpath("sysinfo.csv"))
diff --git a/src/utils/file_io.py b/src/utils/file_io.py
index 5632e9019..6de537def 100644
--- a/src/utils/file_io.py
+++ b/src/utils/file_io.py
@@ -176,7 +176,9 @@ def create_df_kernel_top_stats(
 
 
 @demarcate
-def create_df_pmc(raw_data_root_dir, nodes, kernel_verbose, verbose):
+def create_df_pmc(
+    raw_data_root_dir, nodes, spatial_multiplexing, kernel_verbose, verbose
+):
     """
     Load all raw pmc counters and join into one df.
""" @@ -212,12 +214,7 @@ def create_single_df_pmc(raw_data_dir, node_name, kernel_verbose, verbose): console_debug("pmc_raw_data final_single_df %s" % final_df.info) return final_df - # regular single node case - if nodes is None: - return create_single_df_pmc(raw_data_root_dir, None, kernel_verbose, verbose) - - # "empty list" means all nodes - elif not nodes: + if spatial_multiplexing: df = pd.DataFrame() # todo: more err check for subdir in Path(raw_data_root_dir).iterdir(): @@ -230,15 +227,33 @@ def create_single_df_pmc(raw_data_dir, node_name, kernel_verbose, verbose): # specified node list else: - df = pd.DataFrame() - # todo: more err check - for subdir in nodes: - p = Path(raw_data_root_dir) - new_df = create_single_df_pmc( - p.joinpath(subdir), subdir, kernel_verbose, verbose - ) - df = pd.concat([df, new_df]) - return df + # regular single node case + if nodes is None: + return create_single_df_pmc(raw_data_root_dir, None, kernel_verbose, verbose) + + # "empty list" means all nodes + elif not nodes: + df = pd.DataFrame() + # todo: more err check + for subdir in Path(raw_data_root_dir).iterdir(): + if subdir.is_dir(): + new_df = create_single_df_pmc( + subdir, str(subdir.name), kernel_verbose, verbose + ) + df = pd.concat([df, new_df]) + return df + + # specified node list + else: + df = pd.DataFrame() + # todo: more err check + for subdir in nodes: + p = Path(raw_data_root_dir) + new_df = create_single_df_pmc( + p.joinpath(subdir), subdir, kernel_verbose, verbose + ) + df = pd.concat([df, new_df]) + return df def collect_wave_occu_per_cu(in_dir, out_dir, numSE): diff --git a/src/utils/utils.py b/src/utils/utils.py index 929767b76..e0239d563 100644 --- a/src/utils/utils.py +++ b/src/utils/utils.py @@ -1091,3 +1091,125 @@ def set_locale_encoding(): exit=False, ) console_error(error) + + +def reverse_multi_index_df_pmc(final_df): + """ + Util function to decompose multi-index dataframe. + """ + # Check if the columns have more than one level + if len(final_df.columns.levels) < 2: + raise ValueError("Input DataFrame does not have a multi-index column.") + + # Extract the first level of the MultiIndex columns (the file names) + coll_levels = final_df.columns.get_level_values(0).unique().tolist() + + # Initialize the list of DataFrames + dfs = [] + + # Loop through each 'coll_level' and rebuild the DataFrames + for level in coll_levels: + # Select columns that belong to the current 'coll_level' + columns_for_level = final_df.xs(level, axis=1, level=0) + + # Append the DataFrame for this level + dfs.append(columns_for_level) + + # Return the list of DataFrames and the column levels + return dfs, coll_levels + + +def merge_counters_spatial_multiplex(df_multi_index): + """ + For spatial multiplexing, this merges counter values for the same kernel that runs on different devices. For time stamp, start time stamp will use median while for end time stamp, it will be equal to the summation between median start stamp and median delta time. 
+    non_counter_column_index = [
+        "Dispatch_ID",
+        "GPU_ID",
+        "Queue_ID",
+        "PID",
+        "TID",
+        "Grid_Size",
+        "Workgroup_Size",
+        "LDS_Per_Workgroup",
+        "Scratch_Per_Workitem",
+        "Arch_VGPR",
+        "Accum_VGPR",
+        "SGPR",
+        "Wave_Size",
+        "Kernel_Name",
+        "Start_Timestamp",
+        "End_Timestamp",
+        "Correlation_ID",
+        "Kernel_ID",
+        "Node",
+    ]
+
+    expired_column_index = [
+        "Node",
+        "PID",
+        "TID",
+        "Queue_ID",
+    ]
+
+    result_dfs = []
+
+    # TODO: optimize to avoid this conversion to single-index format and
+    # merge directly on the multi-index dataframe
+    dfs, coll_levels = reverse_multi_index_df_pmc(df_multi_index)
+
+    for df in dfs:
+        kernel_name_column_name = "Kernel_Name"
+        if "Kernel_Name" not in df and "Name" in df:
+            kernel_name_column_name = "Name"
+
+        # Collect the unique kernel names (ordered by frequency)
+        unique_kernel_names = df[kernel_name_column_name].value_counts().index
+
+        # Define a list to store the merged rows
+        result_data = []
+
+        for kernel_name in unique_kernel_names:
+            # Get all rows for the current kernel_name
+            group = df[df[kernel_name_column_name] == kernel_name]
+
+            # Create a dictionary to store the merged row for the current group
+            merged_row = {}
+
+            # Process non-counter columns
+            for col in [
+                col for col in non_counter_column_index if col not in expired_column_index
+            ]:
+                if col == "Start_Timestamp":
+                    # For Start_Timestamp, take the median
+                    merged_row[col] = group["Start_Timestamp"].median()
+                elif col == "End_Timestamp":
+                    # For End_Timestamp, add the median delta time to the median start
+                    delta_time = group["End_Timestamp"] - group["Start_Timestamp"]
+                    median_delta_time = delta_time.median()
+                    merged_row[col] = merged_row["Start_Timestamp"] + median_delta_time
+                else:
+                    # For other non-counter columns, take the first occurrence (0th row)
+                    merged_row[col] = group.iloc[0][col]
+
+            # Process counter columns (all columns not in non_counter_column_index)
+            counter_columns = [
+                col for col in group.columns if col not in non_counter_column_index
+            ]
+            for counter_col in counter_columns:
+                # For counter columns, take the first non-None (non-NaN) value
+                current_valid_counter_group = group[group[counter_col].notna()]
+                first_valid_value = (
+                    current_valid_counter_group.iloc[0][counter_col]
+                    if len(current_valid_counter_group) > 0
+                    else None
+                )
+                merged_row[counter_col] = first_valid_value
+
+            # Append the merged row to the result list
+            result_data.append(merged_row)
+
+        # Create a new DataFrame from the merged rows
+        result_dfs.append(pd.DataFrame(result_data))
+
+    final_df = pd.concat(result_dfs, keys=coll_levels, axis=1, copy=False)
+    return final_df