Spatial multiplexing: part 2, analysis (merge to develop) (#542)
* External CI: rename pipeline to rocprofiler-compute (#463)

Signed-off-by: Daniel Su <[email protected]>

* Update webui branding (#459)

* Update name and icon for browser tab to rocprofiler-compute.

Signed-off-by: xuchen-amd <[email protected]>

* Update name and icon for browser tab to rocprofiler-compute.

Signed-off-by: xuchen-amd <[email protected]>

---------

Signed-off-by: xuchen-amd <[email protected]>

* Update branding in documentation (#442)

* find/replace Omniperf to ROCm Compute Profiler

Signed-off-by: Peter Park <[email protected]>

* update name in Sphinx conf

Signed-off-by: Peter Park <[email protected]>

* mv what-is-omniperf.rst -> what-is-rocprof-compute.rst

Signed-off-by: Peter Park <[email protected]>

* update Tutorials section

Signed-off-by: Peter Park <[email protected]>

* add Omniperf as keyword to Conceptual section for internal search

Signed-off-by: Peter Park <[email protected]>

* update Reference section

Signed-off-by: Peter Park <[email protected]>

* black fmt conf.py

Signed-off-by: Peter Park <[email protected]>

* update profile mode and basic usage subsections

Signed-off-by: Peter Park <[email protected]>

* update how to use analyze mode subsection

Signed-off-by: Peter Park <[email protected]>

* update install section

Signed-off-by: Peter Park <[email protected]>

* fix sphinx warnings

Signed-off-by: Peter Park <[email protected]>

* fix cmd line examples in profile/mode.rst

Signed-off-by: Peter Park <[email protected]>

* update install decision tree image

Signed-off-by: Peter Park <[email protected]>

* fix TOC and index

Signed-off-by: Peter Park <[email protected]>

* fix weird wording

* fix cli text: deriving rocprofiler-compute metrics...

Signed-off-by: Peter Park <[email protected]>

* update standalone-gui.rst

Signed-off-by: Peter Park <[email protected]>

* restore removed doc updates from #428

Signed-off-by: Peter Park <[email protected]>

* update ref to Omniperf in index.rst

Signed-off-by: Peter Park <[email protected]>

* fix grafana connection name to match image

Signed-off-by: Peter Park <[email protected]>

* update cmds in tutorials

Signed-off-by: Peter Park <[email protected]>

---------

Signed-off-by: Peter Park <[email protected]>

* MI300 roofline enablement in rocprofiler-compute (#470)

* MI300 roofline enablement in rocprofiler-compute

requirements.txt
- running some modules failed because the numpy version was too new; add an extra requirement that numpy be 1.x
pmc_roof_perf.txt
- adding TCC_BUBBLE_sum counter to profile
soc_gfx940.py
soc_gfx941.py
soc_gfx942.py
- remove console logs stating that roofline is temporarily disabled; uncomment blocks that check for the roofline CSV and run roofline post-processing
roofline_calc.py
- add mi300 to supported soc
- add new calculation of hbm_data for MI300 using tcc_bubble_sum; it is used only if the counter is > 0
- add to a few comments
roofline-ubuntu-20_04-mi300-rocm6
- binary for the ubuntu systems to enable mi300 roofline calculations from rocm-amdgpu-bench

Note: other distros will also get roofline binaries enabling MI300, but they need further testing before being merged into the branch.

Signed-off-by: Carrie Fallows <[email protected]>

* Reformatting roofline_calc.py

Signed-off-by: Carrie Fallows <[email protected]>

---------

Signed-off-by: Carrie Fallows <[email protected]>

* Update Python format checker (#471)

* Add pre-commit hook for Python formatting

Signed-off-by: coleramos425 <[email protected]>

* Update formatting workflow to run on latest Python and add isort formatter

Signed-off-by: coleramos425 <[email protected]>

* Fix caught yaml formatting issues

* Update pyproject file

* Add pre-commit hook instruction to CONTRIBUTING guide

* Remove target-version from black pyproject.toml

* Fixed formatting errors found with black and isort

Signed-off-by: David Galiffi <[email protected]>

* Run hook: Whitespaces, fix end of file spaces

---------

Signed-off-by: coleramos425 <[email protected]>
Signed-off-by: David Galiffi <[email protected]>
Co-authored-by: David Galiffi <[email protected]>

* Bump cryptography from 43.0.0 to 43.0.1 in /docs/sphinx (#473)

Bumps [cryptography](https://github.com/pyca/cryptography) from 43.0.0 to 43.0.1.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](pyca/cryptography@43.0.0...43.0.1)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Fix file permission on MI300 roofline binary (#477)

Signed-off-by: David Galiffi <[email protected]>

* Removing numpy requirements of <2 (#478)

Checks fail if the version is too high, and there is no need to pin a lower version

Signed-off-by: Carrie Fallows <[email protected]>

* Fix crash when loading web UI roofline for gfx942 (#479)

* Fix crash when loading web UI roofline for gfx942

* Fix formatting

Signed-off-by: benrichard-amd <[email protected]>

* Make the same changes for gfx940, gfx942.

Signed-off-by: benrichard-amd <[email protected]>

* Fix formatting in soc_gfx940 and soc_gfx941.

Signed-off-by: benrichard-amd <[email protected]>

---------

Signed-off-by: benrichard-amd <[email protected]>

* Rebranding name change patch (#469)

* Patch in missed name change for rebranding.

Signed-off-by: xuchen-amd <[email protected]>

* Patch in missed name change for rebranding.

Signed-off-by: xuchen-amd <[email protected]>

---------

Signed-off-by: xuchen-amd <[email protected]>

* Move dependabot.yml to .github/ and bump rocm-docs-core (#481)

* Move dependabot.yml to .github/

* Bump rocm-docs-core to 1.8.5

* Bump rocm-docs-core to 1.9.0

* Fix packaging for upgrading (#486)

Specify that "rocprofiler-compute" replaces / obsoletes the "omniperf" package.

* Renamed extension path from omniperf to rocprofiler_compute (#487)

Signed-off-by: Tim Gu <[email protected]>

* MI300 rhel and sles roofline binaries (#480)

* Roofline bins for MI300 on rhel and sles distributions
Built from rocm-amdgpu-bench, tested on respective distro systems with MI300 hardware.

Signed-off-by: Carrie Fallows <[email protected]>

* Minor modifications removing hardcoded variables in roofline files.

Signed-off-by: Carrie Fallows <[email protected]>

---------

Signed-off-by: Carrie Fallows <[email protected]>

* Modify test_profile_general.py ctest to include MI300 enablement (#498)

Signed-off-by: Carrie Fallows <[email protected]>

* part 1 to support rocprofv3 (#492)

* rocprofv3 support initial commit

- Can run rocprofv3, but it ultimately fails: rocprofv3 reports that the counter capacity
is exceeded, and the output CSV file format differs from v1/v2.

* Add rocprofv3 detection so v2 can still be used

It's hacky but it'll do for now.

* Add code path to convert rocprofv3 JSON output into CSV

* Grab correct value for Queue ID

* Use _sum suffix to sum TCC counters

Previously we were specifying each channel for TCC counters. rocprofv3 does
not support specifying each TCC channel; instead, it automatically sums given
the base TCC counter name. The counter name with the _sum suffix is also
supported in v1 and v2, so we will use the TCC counter name with the _sum
suffix.
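
The per-channel-to-`_sum` rewrite described above can be sketched as follows. This is a minimal illustration, not the actual rocprofiler-compute code; the bracketed channel syntax and the sample counter names are assumptions:

```python
import re

def to_sum_counter(counter: str) -> str:
    # Collapse a per-channel TCC counter such as "TCC_HIT[3]" into the
    # aggregated "_sum" form that rocprofv3 (and v1/v2) accept.
    m = re.match(r"^(TCC_\w+?)(?:_sum)?(?:\[\d+\])?$", counter)
    if m:
        return m.group(1) + "_sum"
    return counter  # non-TCC counters pass through unchanged

counters = ["TCC_HIT[0]", "TCC_HIT[1]", "TCC_EA_WRREQ[15]", "SQ_WAVES"]
# Deduplicate while preserving order: 16 channel entries become one _sum entry.
merged = list(dict.fromkeys(to_sum_counter(c) for c in counters))
```

The deduplication step matters: once every channel maps to the same `_sum` name, only one counter slot is consumed per TCC counter.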

* Fix incorrect counter outputs when using rocprofv3

In the JSON output some counters appear multiple times and must be
summed to get the correct value. These summed values match the
rocprofv3 output in CSV mode and also match the rocprofv2
output.
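
The summing step can be sketched with pandas. The column names below are illustrative stand-ins for the decoded JSON records, not the real rocprofv3 schema:

```python
import pandas as pd

# Hypothetical rows decoded from rocprofv3 JSON: the same dispatch may
# report a counter several times, and the pieces must be summed.
records = pd.DataFrame(
    {
        "Dispatch_ID": [0, 0, 0, 1],
        "Counter_Name": ["SQ_WAVES", "SQ_WAVES", "GRBM_COUNT", "SQ_WAVES"],
        "Counter_Value": [100.0, 28.0, 5.0, 64.0],
    }
)

# Sum the partial values per (dispatch, counter), then pivot to one row
# per dispatch with one column per counter - the shape a CSV row takes.
summed = (
    records.groupby(["Dispatch_ID", "Counter_Name"], as_index=False)["Counter_Value"]
    .sum()
    .pivot(index="Dispatch_ID", columns="Counter_Name", values="Counter_Value")
)
```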

* Remove duplicate Correlation_ID and Wave_Size in output

* Handle json output that does not contain any dispatches

Omniperf was assuming each JSON output from rocprofv3 would always contain
dispatches. This is not the case: for example, in a multi-process
workload, one of the processes may not dispatch any kernels. A JSON
file will still be output for this process, but it will not contain any dispatches.
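
The guard described above amounts to treating a missing or empty dispatch list as "nothing to do" rather than an error. A minimal sketch, with an assumed top-level key name:

```python
import json

def dispatches_from_json(text: str) -> list:
    # A rocprofv3-style JSON file from a process that launched no kernels
    # may have no dispatch records at all; return an empty list instead
    # of crashing. ("dispatches" is an illustrative key, not the real schema.)
    data = json.loads(text)
    return data.get("dispatches") or []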

* Code cleanup

* Update search path for rocprofv3 results

Rocprofv3 was updated to include the hostname in the path where
it outputs results.
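
Handling both the old and new layouts can be done by probing for the hostname subdirectory. This is a sketch of the idea, not the project's actual search code:

```python
import socket
from pathlib import Path

def find_rocprofv3_results(out_dir: Path) -> Path:
    # Newer rocprofv3 nests results under a hostname directory
    # (out_dir/<hostname>/...); older layouts wrote directly into
    # out_dir. Probe for the hostname directory so both are handled.
    hostdir = out_dir / socket.gethostname()
    return hostdir if hostdir.is_dir() else out_dir
```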

* Handle accumulate counters

In v1/v2 rocprof uses the SQ_ACCUM_PREV_HIRES counter for the accumulate
counters. v3 does not have this, so we need to define our own counters
in counter_defs.yaml. For this we use the counter name + _ACCUM, for
example SQ_INSTR_LEVEL_SMEM_ACCUM.

To use rocprofv3 you will need to update counter_defs.yaml to include
these new counter definitions.
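
The naming convention is mechanical, so the list of definitions to add can be derived from the base counter names. A sketch (the second counter name is illustrative only):

```python
def accum_counter_name(base_counter: str) -> str:
    # v3 has no SQ_ACCUM_PREV_HIRES, so each accumulate counter is
    # defined under its own name: the base counter plus "_ACCUM".
    return base_counter + "_ACCUM"

# Names that would need entries in counter_defs.yaml (illustrative list).
needed = [accum_counter_name(c) for c in ["SQ_INSTR_LEVEL_SMEM", "SQ_INSTR_LEVEL_VMEM"]]
```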

* Use correct GPU ID

When converting JSON -> CSV we were assigning node_id to GPU_ID. Since
the JSON contains non-GPU devices, the node_id for GPUs might not
start at 0 as expected.

This commit maps the agent ID to the appropriate GPU ID.
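mapping = {1: 0, 2: 1} ? wait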
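
Numbering only the GPU agents gives the mapping that commit describes. A sketch with illustrative field names, not the real rocprofv3 JSON schema:

```python
def gpu_id_by_agent(agents):
    # The JSON lists CPU and GPU agents together, so a GPU's node_id may
    # not start at 0. Enumerate only the GPU agents, in order, to get
    # the GPU_ID values the CSV format expects.
    gpu_agents = [a for a in agents if a.get("type") == "GPU"]
    return {a["id"]: gpu_index for gpu_index, a in enumerate(gpu_agents)}

agents = [
    {"id": 0, "type": "CPU"},
    {"id": 2, "type": "GPU"},
    {"id": 3, "type": "GPU"},
]
mapping = gpu_id_by_agent(agents)
```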

* Parse scratch memory per work item from JSON

* Support rocprofv3 CSV parsing

JSON decoding is very slow for large files. Include support for parsing
rocprofv3 CSV output and make that the default.

CSV/JSON can be toggled via the ROCPROF_OUTPUT_FORMAT environment
variable e.g. ROCPROF_OUTPUT_FORMAT=csv or ROCPROF_OUTPUT_FORMAT=json
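
Reading that toggle with a CSV default can be sketched as below. Falling back to csv on an unrecognized value is a simplifying assumption of this sketch:

```python
import os

def rocprof_output_format() -> str:
    # CSV is the default because JSON decoding is slow for large runs;
    # ROCPROF_OUTPUT_FORMAT=json opts back in. Unknown values fall back
    # to csv rather than aborting (an assumption of this sketch).
    fmt = os.environ.get("ROCPROF_OUTPUT_FORMAT", "csv").lower()
    return fmt if fmt in ("csv", "json") else "csv"
```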

* black format after merge

* format isort

* change the return value of rocprof_cmd to try to resolve the test error

* hack to pick last part of rocminfo's name

* debug log of hacks

* Modify test_profile_general.py ctest to include MI300 enablement. Currently failing because of explicitly excluded roofline files for the SoC and auto-failed asserts for roof-only tests, originally in place because roofline was not yet enabled on MI300.

Signed-off-by: Carrie Fallows <[email protected]>

* black and isort formatted

* corrected copyright line

---------

Signed-off-by: Carrie Fallows <[email protected]>
Co-authored-by: benrichard-amd <[email protected]>
Co-authored-by: YANG WANG <[email protected]>
Co-authored-by: Carrie Fallows <[email protected]>

* fix for timestamp crash in part 1 of rocprofv3 support (#499)

* fix the error caused by ignoring a missing counter CSV file from rocprofv3 when collecting timestamps

* isort and black formatted

* quick fix for gfx906 roofline (#505)

* Multi node support (#503)

* [CTest] Pipeline failures for MI300 (#483)

* Propagate new chip_id logic to testing workflow

Signed-off-by: coleramos425 <[email protected]>

* Add a debug line to tests

Signed-off-by: coleramos425 <[email protected]>

* Trying to set rocprofv2 generally in CTest module

Signed-off-by: coleramos425 <[email protected]>

* Remove temp debugging lines from CI

Signed-off-by: coleramos425 <[email protected]>

* Add roofline entry for MI300 expected files in CI tests

Signed-off-by: coleramos425 <[email protected]>

* Make num_devices modifier global in scope

Signed-off-by: coleramos425 <[email protected]>

* Change kernel name in PyTest to confirm rocprofv2 bug

Related to https://ontrack-internal.amd.com/browse/SWDEV-503453

Signed-off-by: coleramos425 <[email protected]>

---------

Signed-off-by: coleramos425 <[email protected]>

* Spatial-multiplexing: part 1 profiling stage (#465)

* rocprofv3 support initial commit

- Can run rocprofv3, but it ultimately fails: rocprofv3 reports that the counter capacity
is exceeded, and the output CSV file format differs from v1/v2.

* Add rocprofv3 detection so v2 can still be used

It's hacky but it'll do for now.

* Add code path to convert rocprofv3 JSON output into CSV

* Grab correct value for Queue ID

* Use _sum suffix to sum TCC counters

Previously we were specifying each channel for TCC counters. rocprofv3 does
not support specifying each TCC channel; instead, it automatically sums given
the base TCC counter name. The counter name with the _sum suffix is also
supported in v1 and v2, so we will use the TCC counter name with the _sum
suffix.

* Fix incorrect counter outputs when using rocprofv3

In the JSON output some counters appear multiple times and must be
summed to get the correct value. These summed values match the
rocprofv3 output in CSV mode and also match the rocprofv2
output.

* Remove duplicate Correlation_ID and Wave_Size in output

* Handle json output that does not contain any dispatches

Omniperf was assuming each JSON output from rocprofv3 would always contain
dispatches. This is not the case: for example, in a multi-process
workload, one of the processes may not dispatch any kernels. A JSON
file will still be output for this process, but it will not contain any dispatches.

* Code cleanup

* Update search path for rocprofv3 results

Rocprofv3 was updated to include the hostname in the path where
it outputs results.

* Handle accumulate counters

In v1/v2 rocprof uses the SQ_ACCUM_PREV_HIRES counter for the accumulate
counters. v3 does not have this, so we need to define our own counters
in counter_defs.yaml. For this we use the counter name + _ACCUM, for
example SQ_INSTR_LEVEL_SMEM_ACCUM.

To use rocprofv3 you will need to update counter_defs.yaml to include
these new counter definitions.

* debug code

* add logic code for multiplexing

* minor fix

* more fixes

* count accu files as well

* Use correct GPU ID

When converting JSON -> CSV we were assigning node_id to GPU_ID. Since
the JSON contains non-GPU devices, the node_id for GPUs might not
start at 0 as expected.

This commit maps the agent ID to the appropriate GPU ID.

* fix error with csv file parse from json and merge during post-processing

* implemented parsing of csv files from v3 output for optimization

* Parse scratch memory per work item from JSON

* Support rocprofv3 CSV parsing

JSON decoding is very slow for large files. Include support for parsing
rocprofv3 CSV output and make that the default.

CSV/JSON can be toggled via the ROCPROF_OUTPUT_FORMAT environment
variable e.g. ROCPROF_OUTPUT_FORMAT=csv or ROCPROF_OUTPUT_FORMAT=json

* black format after merge

* format isort

* change the return value of rocprof_cmd to try to resolve the test error

* hack to pick last part of rocminfo's name

* debug log of hacks

* Modify test_profile_general.py ctest to include MI300 enablement. Currently failing because of explicitly excluded roofline files for the SoC and auto-failed asserts for roof-only tests, originally in place because roofline was not yet enabled on MI300.

Signed-off-by: Carrie Fallows <[email protected]>

* black and isort formatted

* formatted by isort and black

* change default rocprof's output to csv

* repaired a crash caused by a missing CSV counter file when running for timestamps

* change name to spatial-multiplexing from multiplexing

* make necessary modification for review

* set the default value of the spatial_multiplexing argument to None

* repair the part that blocked generation of regular pmc files

---------

Signed-off-by: Carrie Fallows <[email protected]>
Co-authored-by: benrichard-amd <[email protected]>
Co-authored-by: fei.zheng <[email protected]>
Co-authored-by: YANG WANG <[email protected]>
Co-authored-by: Carrie Fallows <[email protected]>

* Simple fix for gpu model value. (#508)

Signed-off-by: xuchen-amd <[email protected]>

* Add FP64 to plot adhering to pdf name (#507)

* Replacing the FP32-only plot with an FP32 & FP64 combo plot. The difference will likely be negligible, but the plot name indicates both should be graphed.

Signed-off-by: Carrie Fallows <[email protected]>

* Remove duplicate AI plot to clean up fp32 fp64 graph

Signed-off-by: Carrie Fallows <[email protected]>

---------

Signed-off-by: Carrie Fallows <[email protected]>

* Add gpu series for roofline (#510)

* Add gpu_series for roofline.

* Use gpu_series in path names for roofline.

* Fix TCC on MI200 when introducing rocprofv3 (#509)

* quick fix for v2

* one more fix

* revert a bit

---------

Co-authored-by: ywang103-amd <[email protected]>

* initial implementation of analyze part of spatial-multiplexing

* Bump rocm-docs-core from 1.9.0 to 1.12.0 in /docs/sphinx (#511)

Bumps [rocm-docs-core](https://github.com/ROCm/rocm-docs-core) from 1.9.0 to 1.12.0.
- [Release notes](https://github.com/ROCm/rocm-docs-core/releases)
- [Changelog](https://github.com/ROCm/rocm-docs-core/blob/develop/CHANGELOG.md)
- [Commits](ROCm/rocm-docs-core@v1.9.0...v1.12.0)

---
updated-dependencies:
- dependency-name: rocm-docs-core
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* add exclusion for expired columns

* block sanity check and set correct dir for sysinfo.csv

* correct typo in function merge_counters_spatial_multiplex and format the code

* add call to CSV merging for multiplexing and modify the condition check

* modify logic of create_df_pmc

* Update sample roofline plot img (#516)

* Modify path to use gpu_model instead of gpu_series to match other workload directory path creation/search points. Affects manual testing, does not seem to affect ctests. (#513)

Signed-off-by: Carrie Fallows <[email protected]>

* Improve formatting when displaying rocprof command. (#476)

* Improve formatting when displaying rocprof command.

Signed-off-by: xuchen-amd <[email protected]>

* Fix python formatting.

Signed-off-by: xuchen-amd <[email protected]>

* Strip unwanted characters (rocprofv1 specific) from rocprof commands.

Signed-off-by: xuchen-amd <[email protected]>

* Strip unwanted characters (rocprofv1 specific) from rocprof commands.

Signed-off-by: xuchen-amd <[email protected]>

* Save the unmodified arguments for rocprof for debug message display.

Signed-off-by: xuchen-amd <[email protected]>

---------

Signed-off-by: xuchen-amd <[email protected]>

* resolve issue related to multi-indexing(need to resolve metric calculation next step)

* quick fix for mpi_support (#518)

* Pass accumulate counters to rocprofv3 using -E option (#522)

rocprofv3 has a new -E option through which extra counters can be passed (see accum_counters.yaml)
instead of defining them in counter_defs.yaml.
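
Assembling such an invocation can be sketched as below. Only the -E flag comes from the commit above; the -i flag, file names, and overall argument order are assumptions for illustration:

```python
from typing import List, Optional

def build_rocprofv3_cmd(
    pmc_file: str, app_cmd: List[str], accum_yaml: Optional[str]
) -> List[str]:
    # Sketch of a rocprofv3 command line that passes extra (accumulate)
    # counter definitions via -E instead of editing counter_defs.yaml.
    cmd = ["rocprofv3", "-i", pmc_file]
    if accum_yaml is not None:
        cmd += ["-E", accum_yaml]
    return cmd + ["--"] + app_cmd
```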

* Unify all file handling with pathlib (#512)

* Replace occurrences of os.path functions with equivalent functions from
  the pathlib library

* Remove unwanted imports of os.path and os

* Add coding guidelines for using pathlib instead of os.path
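
The substitutions the guideline prescribes look like this (file paths here are examples only):

```python
from pathlib import Path

# os.path style (before)          ->  pathlib style (after)
#   os.path.join(root, "a.csv")   ->  Path(root) / "a.csv"
#   os.path.exists(p)             ->  Path(p).exists()
#   os.path.basename(p)           ->  Path(p).name
workload = Path("/tmp") / "workloads" / "app"
sysinfo = workload / "sysinfo.csv"
```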

* Auto sync staging and mainline on a weekly cadence (#517)

Signed-off-by: coleramos425 <[email protected]>

* Docker environment for testing (#515)

* Add instructions on how to use Docker for manual and automatic testing

* Quick fix for mpi_support (#527)

* changed the logic that adds the roofline file to the expected CSV list, to fix a crash on MI100 (#528)

* fixed the content issue in counter columns and made analyze mode work

* remove unnecessary log print

* Use PAT to bypass branch protection

* removed the requirement for the user to specify node info in the spatial_multiplexing argument, and added comments on the dataframe-merging functions

* add log for setup_workload_dir to check failure

* change the way spatial multiplexing mode is detected, to try to resolve a unit test failure

* remove unnecessary comment

* add TODO note to indicate future removal of the format conversion

---------

Signed-off-by: Daniel Su <[email protected]>
Signed-off-by: xuchen-amd <[email protected]>
Signed-off-by: Peter Park <[email protected]>
Signed-off-by: Carrie Fallows <[email protected]>
Signed-off-by: coleramos425 <[email protected]>
Signed-off-by: David Galiffi <[email protected]>
Signed-off-by: dependabot[bot] <[email protected]>
Signed-off-by: Carrie Fallows <[email protected]>
Signed-off-by: benrichard-amd <[email protected]>
Signed-off-by: Tim Gu <[email protected]>
Co-authored-by: Daniel Su <[email protected]>
Co-authored-by: xuchen-amd <[email protected]>
Co-authored-by: Peter Park <[email protected]>
Co-authored-by: cfallows-amd <[email protected]>
Co-authored-by: Cole Ramos <[email protected]>
Co-authored-by: David Galiffi <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ben Richard <[email protected]>
Co-authored-by: Tim Gu <[email protected]>
Co-authored-by: benrichard-amd <[email protected]>
Co-authored-by: YANG WANG <[email protected]>
Co-authored-by: Fei Zheng <[email protected]>
Co-authored-by: fei.zheng <[email protected]>
Co-authored-by: vedithal-amd <[email protected]>
15 people authored Feb 13, 2025
1 parent 9643afa commit ac3b50f
Showing 7 changed files with 202 additions and 17 deletions.
8 changes: 8 additions & 0 deletions src/argparser.py
@@ -495,6 +495,14 @@ def omniarg_parser(
nargs="+",
help="\t\tSpecify GPU id(s) for filtering.",
)
analyze_group.add_argument(
"--spatial-multiplexing",
dest="spatial_multiplexing",
required=False,
default=False,
action="store_true",
help="\t\t\tMode of spatial multiplexing.",
)
analyze_group.add_argument(
"-o",
"--output",
13 changes: 12 additions & 1 deletion src/rocprof_compute_analyze/analysis_base.py
@@ -36,6 +36,7 @@
console_log,
demarcate,
is_workload_empty,
merge_counters_spatial_multiplex,
)


@@ -57,6 +58,10 @@ def set_soc(self, omni_socs):
def get_socs(self):
return self.__socs

@demarcate
def spatial_multiplex_merge_counters(self, df):
return merge_counters_spatial_multiplex(df)

@demarcate
def generate_configs(self, arch, config_dir, list_stats, filter_metrics, sys_info):
single_panel_config = file_io.is_single_panel_config(
@@ -142,6 +147,7 @@ def initalize_runs(self, normalization_filter=None):
sysinfo_path = (
Path(d[0])
if self.__args.nodes is None
and self.__args.spatial_multiplexing is not True
else file_io.find_1st_sub_dir(d[0])
)
sys_info = file_io.load_sys_info(sysinfo_path.joinpath("sysinfo.csv"))
@@ -166,6 +172,7 @@
sysinfo_path = (
Path(d[0])
if self.__args.nodes is None
and self.__args.spatial_multiplexing is not True
else file_io.find_1st_sub_dir(d[0])
)
w.sys_info = file_io.load_sys_info(sysinfo_path.joinpath("sysinfo.csv"))
@@ -199,7 +206,11 @@ def sanitize(self):
# validate profiling data

# Todo: more err check
if not (self.__args.nodes != None or self.__args.list_nodes):
if not (
self.__args.nodes != None
or self.__args.list_nodes
or self.__args.spatial_multiplexing
):
is_workload_empty(dir[0])
# else:

6 changes: 6 additions & 0 deletions src/rocprof_compute_analyze/analysis_cli.py
@@ -44,10 +44,16 @@ def pre_processing(self):
self._runs[d[0]].raw_pmc = file_io.create_df_pmc(
d[0],
self.get_args().nodes,
self.get_args().spatial_multiplexing,
self.get_args().kernel_verbose,
self.get_args().verbose,
)

if self.get_args().spatial_multiplexing:
self._runs[d[0]].raw_pmc = self.spatial_multiplex_merge_counters(
self._runs[d[0]].raw_pmc
)

file_io.create_df_kernel_top_stats(
df_in=self._runs[d[0]].raw_pmc,
raw_data_dir=d[0],
13 changes: 13 additions & 0 deletions src/rocprof_compute_analyze/analysis_webui.py
@@ -113,9 +113,16 @@ def generate_from_filter(
base_data[base_run].raw_pmc = file_io.create_df_pmc(
self.dest_dir,
self.get_args().nodes,
self.get_args().spatial_multiplexing,
self.get_args().kernel_verbose,
self.get_args().verbose,
)

if self.get_args().spatial_multiplexing:
base_data[base_run].raw_pmc = self.spatial_multiplex_merge_counters(
base_data[base_run].raw_pmc
)

console_debug("analysis", "gui dispatch filter is %s" % disp_filt)
console_debug("analysis", "gui kernel filter is %s" % kernel_filter)
console_debug("analysis", "gui gpu filter is %s" % gcd_filter)
@@ -290,6 +297,12 @@ def pre_processing(self):
self.get_args().kernel_verbose,
args.verbose,
)

if self.get_args().spatial_multiplexing:
self._runs[self.dest_dir].raw_pmc = self.spatial_multiplex_merge_counters(
self._runs[self.dest_dir].raw_pmc
)

file_io.create_df_kernel_top_stats(
df_in=self._runs[self.dest_dir].raw_pmc,
raw_data_dir=self.dest_dir,
10 changes: 10 additions & 0 deletions src/rocprof_compute_base.py
@@ -219,6 +219,15 @@ def parse_args(self):
p.mkdir(parents=True, exist_ok=False)
except FileExistsError:
console_error("Directory already exists.")

elif self.__args.mode == "analyze":
# block all filters during spatial-multiplexing
if self.__args.spatial_multiplexing:
self.__args.gpu_id = None
self.__args.gpu_kernel = None
self.__args.gpu_dispatch_id = None
self.__args.nodes = None

return

@demarcate
@@ -342,6 +351,7 @@ def run_analysis(self):
sysinfo_path = (
Path(d[0])
if analyzer.get_args().nodes is None
and analyzer.get_args().spatial_multiplexing is not True
else file_io.find_1st_sub_dir(d[0])
)
sys_info = file_io.load_sys_info(sysinfo_path.joinpath("sysinfo.csv"))
47 changes: 31 additions & 16 deletions src/utils/file_io.py
@@ -176,7 +176,9 @@ def create_df_kernel_top_stats(


@demarcate
def create_df_pmc(raw_data_root_dir, nodes, kernel_verbose, verbose):
def create_df_pmc(
raw_data_root_dir, nodes, spatial_multiplexing, kernel_verbose, verbose
):
"""
Load all raw pmc counters and join into one df.
"""
@@ -212,12 +214,7 @@ def create_single_df_pmc(raw_data_dir, node_name, kernel_verbose, verbose):
console_debug("pmc_raw_data final_single_df %s" % final_df.info)
return final_df

# regular single node case
if nodes is None:
return create_single_df_pmc(raw_data_root_dir, None, kernel_verbose, verbose)

# "empty list" means all nodes
elif not nodes:
if spatial_multiplexing:
df = pd.DataFrame()
# todo: more err check
for subdir in Path(raw_data_root_dir).iterdir():
@@ -230,15 +227,33 @@ def create_single_df_pmc(raw_data_dir, node_name, kernel_verbose, verbose):

# specified node list
else:
df = pd.DataFrame()
# todo: more err check
for subdir in nodes:
p = Path(raw_data_root_dir)
new_df = create_single_df_pmc(
p.joinpath(subdir), subdir, kernel_verbose, verbose
)
df = pd.concat([df, new_df])
return df
# regular single node case
if nodes is None:
return create_single_df_pmc(raw_data_root_dir, None, kernel_verbose, verbose)

# "empty list" means all nodes
elif not nodes:
df = pd.DataFrame()
# todo: more err check
for subdir in Path(raw_data_root_dir).iterdir():
if subdir.is_dir():
new_df = create_single_df_pmc(
subdir, str(subdir.name), kernel_verbose, verbose
)
df = pd.concat([df, new_df])
return df

# specified node list
else:
df = pd.DataFrame()
# todo: more err check
for subdir in nodes:
p = Path(raw_data_root_dir)
new_df = create_single_df_pmc(
p.joinpath(subdir), subdir, kernel_verbose, verbose
)
df = pd.concat([df, new_df])
return df


def collect_wave_occu_per_cu(in_dir, out_dir, numSE):
122 changes: 122 additions & 0 deletions src/utils/utils.py
@@ -1091,3 +1091,125 @@ def set_locale_encoding():
exit=False,
)
console_error(error)


def reverse_multi_index_df_pmc(final_df):
"""
Util function to decompose multi-index dataframe.
"""
# Check if the columns have more than one level
if len(final_df.columns.levels) < 2:
raise ValueError("Input DataFrame does not have a multi-index column.")

# Extract the first level of the MultiIndex columns (the file names)
coll_levels = final_df.columns.get_level_values(0).unique().tolist()

# Initialize the list of DataFrames
dfs = []

# Loop through each 'coll_level' and rebuild the DataFrames
for level in coll_levels:
# Select columns that belong to the current 'coll_level'
columns_for_level = final_df.xs(level, axis=1, level=0)

# Append the DataFrame for this level
dfs.append(columns_for_level)

# Return the list of DataFrames and the column levels
return dfs, coll_levels


def merge_counters_spatial_multiplex(df_multi_index):
"""
For spatial multiplexing, this merges counter values for the same kernel
run on different devices. The start timestamp uses the median across
devices, and the end timestamp equals the median start timestamp plus
the median delta time.
"""
non_counter_column_index = [
"Dispatch_ID",
"GPU_ID",
"Queue_ID",
"PID",
"TID",
"Grid_Size",
"Workgroup_Size",
"LDS_Per_Workgroup",
"Scratch_Per_Workitem",
"Arch_VGPR",
"Accum_VGPR",
"SGPR",
"Wave_Size",
"Kernel_Name",
"Start_Timestamp",
"End_Timestamp",
"Correlation_ID",
"Kernel_ID",
"Node",
]

expired_column_index = [
"Node",
"PID",
"TID",
"Queue_ID",
]

result_dfs = []

# TODO: optimize to avoid this conversion to single-index format and merge directly on the multi-index dataframe
dfs, coll_levels = reverse_multi_index_df_pmc(df_multi_index)

for df in dfs:
kernel_name_column_name = "Kernel_Name"
if not "Kernel_Name" in df and "Name" in df:
kernel_name_column_name = "Name"

# Find the values in Kernel_Name that occur more than once
kernel_single_occurances = df[kernel_name_column_name].value_counts().index

# Define a list to store the merged rows
result_data = []

for kernel_name in kernel_single_occurances:
# Get all rows for the current kernel_name
group = df[df[kernel_name_column_name] == kernel_name]

# Create a dictionary to store the merged row for the current group
merged_row = {}

# Process non-counter columns
for col in [
col for col in non_counter_column_index if col not in expired_column_index
]:
if col == "Start_Timestamp":
# For Start_Timestamp, take the median
merged_row[col] = group["Start_Timestamp"].median()
elif col == "End_Timestamp":
# For End_Timestamp, calculate the median delta time
delta_time = group["End_Timestamp"] - group["Start_Timestamp"]
median_delta_time = delta_time.median()
merged_row[col] = merged_row["Start_Timestamp"] + median_delta_time
else:
# For other non-counter columns, take the first occurrence (0th row)
merged_row[col] = group.iloc[0][col]

# Process counter columns (assumed to be all columns not in non_counter_column_index)
counter_columns = [
col for col in group.columns if col not in non_counter_column_index
]
for counter_col in counter_columns:
# for counter columns, take the first non-none (or non-nan) value
current_valid_counter_group = group[group[counter_col].notna()]
first_valid_value = (
current_valid_counter_group.iloc[0][counter_col]
if len(current_valid_counter_group) > 0
else None
)
merged_row[counter_col] = first_valid_value

# Append the merged row to the result list
result_data.append(merged_row)

# Create a new DataFrame from the merged rows
result_dfs.append(pd.DataFrame(result_data))

final_df = pd.concat(result_dfs, keys=coll_levels, axis=1, copy=False)
return final_df
