.. _building-frontier:

Frontier (OLCF)
===============

The `Frontier cluster (see: Crusher) <https://docs.olcf.ornl.gov/systems/crusher_quick_start_guide.html>`_ is located at OLCF.
Each node contains 4 AMD MI250X GPUs, each with 2 Graphics Compute Dies (GCDs), for a total of 8 GCDs per node.
You can think of the 8 GCDs as 8 separate GPUs, each having 64 GB of high-bandwidth memory (HBM2E).

If you are new to this system, please see the following resources:

* `Crusher user guide <https://docs.olcf.ornl.gov/systems/crusher_quick_start_guide.html>`_
* Batch system: `Slurm <https://docs.olcf.ornl.gov/systems/crusher_quick_start_guide.html#running-jobs>`_
* `Production directories <https://docs.olcf.ornl.gov/data/storage_overview.html>`_:

  * ``$PROJWORK/$proj/``: shared with all members of a project (recommended)
  * ``$MEMBERWORK/$proj/``: single user (usually smaller quota)
  * ``$WORLDWORK/$proj/``: shared with all users
  * Note that the ``$HOME`` directory is mounted as read-only on compute nodes, so you cannot run simulations from ``$HOME``.
    Use one of the production directories instead, as shown in the sketch after this list.
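
For example, here is a minimal sketch of preparing a run directory in the shared project work space (``my_first_run`` is an arbitrary name, and the placeholder ``<yourProjectID>`` must be replaced with your OLCF project ID):

.. code-block:: bash

    # set this to your OLCF project ID first (placeholder value)
    proj=<yourProjectID>

    # create and enter a run directory on the parallel file system,
    # since $HOME is read-only on compute nodes
    mkdir -p $PROJWORK/$proj/$USER/my_first_run
    cd $PROJWORK/$proj/$USER/my_first_run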


Installation
------------

Use the following commands to download the WarpX source code and switch to the correct branch.
**You have to do this on Summit/OLCF Home/etc., since Frontier cannot connect directly to the internet**:

.. code-block:: bash

    git clone https://github.com/ECP-WarpX/WarpX.git $HOME/src/warpx
    git clone https://github.com/AMReX-Codes/amrex.git $HOME/src/amrex
    git clone https://github.com/ECP-WarpX/picsar.git $HOME/src/picsar
    git clone -b 0.14.5 https://github.com/openPMD/openPMD-api.git $HOME/src/openPMD-api

To enable HDF5, work around the broken (empty) ``HDF5_VERSION`` variable in the Cray PE by commenting out the following lines in ``$HOME/src/openPMD-api/CMakeLists.txt``:
https://github.com/openPMD/openPMD-api/blob/0.14.5/CMakeLists.txt#L216-L220
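If you prefer a one-liner, the same edit can be scripted (a sketch, assuming the L216-L220 line numbers referenced above still match the 0.14.5 tag):

.. code-block:: bash

    # comment out lines 216-220 of the openPMD-api CMakeLists.txt
    sed -i '216,220 s/^/#/' $HOME/src/openPMD-api/CMakeLists.txt
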
We use the following modules and environments on the system (``$HOME/frontier_warpx.profile``).

.. literalinclude:: ../../../../Tools/machines/frontier-olcf/frontier_warpx.profile.example
   :language: bash
   :caption: You can copy this file from ``Tools/machines/frontier-olcf/frontier_warpx.profile.example``.

We recommend storing the above lines in a file, such as ``$HOME/frontier_warpx.profile``, and loading it into your shell after each login:

.. code-block:: bash

    source $HOME/frontier_warpx.profile

Then ``cd`` into the directory ``$HOME/src/warpx`` and use the following commands to compile:

.. code-block:: bash

    cd $HOME/src/warpx
    rm -rf build

    cmake -S . -B build \
        -DWarpX_COMPUTE=HIP \
        -DWarpX_amrex_src=$HOME/src/amrex \
        -DWarpX_picsar_src=$HOME/src/picsar \
        -DWarpX_openpmd_src=$HOME/src/openPMD-api
    cmake --build build -j 32

The general :ref:`cmake compile-time options <building-cmake>` apply as usual.
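
For example, further options can be appended to the configure step; the sketch below adds ``-DWarpX_DIMS=2`` for a 2D build (an illustrative choice from the general options linked above, using a separate build directory):

.. code-block:: bash

    # illustrative: configure and build a 2D executable in build_2d
    cmake -S . -B build_2d \
        -DWarpX_COMPUTE=HIP \
        -DWarpX_DIMS=2 \
        -DWarpX_amrex_src=$HOME/src/amrex \
        -DWarpX_picsar_src=$HOME/src/picsar \
        -DWarpX_openpmd_src=$HOME/src/openPMD-api
    cmake --build build_2d -j 32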


.. _running-cpp-frontier:

Running
-------

.. _running-cpp-frontier-MI250X-GPUs:

MI250X GPUs (2x64 GB)
^^^^^^^^^^^^^^^^^^^^^

After requesting an interactive node with the ``getNode`` alias above, run a simulation like this, here using 8 MPI ranks on a single node:

.. code-block:: bash

    runNode ./warpx inputs
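
If you prefer to call Slurm directly instead of the alias, an equivalent launch might look like the sketch below (illustrative flags; the exact options wrapped by ``runNode`` are defined in the profile file above):

.. code-block:: bash

    # one node, 8 MPI ranks, one GCD ("GPU") per rank (illustrative)
    srun -N 1 -n 8 --ntasks-per-node=8 --gpus-per-node=8 --gpu-bind=closest ./warpx inputs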
|
Or in non-interactive runs:

.. literalinclude:: ../../../../Tools/machines/frontier-olcf/submit.sh
   :language: bash
   :caption: You can copy this file from ``Tools/machines/frontier-olcf/submit.sh``.


.. _post-processing-frontier:

Post-Processing
---------------

For post-processing, most users use Python via OLCF's `Jupyter service <https://jupyter.olcf.ornl.gov>`__ (`Docs <https://docs.olcf.ornl.gov/services_and_applications/jupyter/index.html>`__).

Please follow the same guidance as for :ref:`OLCF Summit post-processing <post-processing-summit>`.

.. _known-frontier-issues:

Known System Issues
-------------------

.. warning::

   May 16th, 2022 (OLCFHELP-6888):
   There is a caching bug in Libfabric that can cause WarpX simulations to occasionally hang on Crusher when running on more than one node.

   As a work-around, please export the following environment variable in your job scripts until the issue is fixed:

   .. code-block:: bash

      export FI_MR_CACHE_MAX_COUNT=0  # disable libfabric caching