- [Configure Grafana Dashboard for Slurm](#configure-grafana-dashboard-for-slurm)
- [Using Terraform to Automate the Deployment of your OpenStack Instances](#using-terraform-to-automate-the-deployment-of-your-openstack-instances)
- [Using Ansible to Automate the Configuration of your VMs](#using-ansisble-to-automate-the-configuration-of-your-vms)
- [Introduction to Continuous Integration](#introduction-to-continuous-integration)
- [GitHub](#github)
- [TravisCI](#travisci)
- [CircleCI](#circleci)
- [GROMACS Protein Visualisation](#gromacs-protein-visualisation)
- [Running Qiskit from a Remote Jupyter Notebook Server](#running-qiskit-from-a-remote-jupyter-notebook-server)
<!-- markdown-toc end -->
# Checklist
Tutorial 4 demonstrates environment module manipulation and the compilation and optimisation of HPC benchmark software. This introduces the reader to the concepts of environment management and workspace sanity, as well as the compilation of software on Linux.
This tutorial demonstrates _cluster monitoring_ and _workload scheduling_, two components that are critical to a typical HPC environment. Monitoring is widely used in system administration (including enterprise datacentres and corporate networks): it allows administrators to see what is happening on any monitored system and to proactively identify potential issues. A workload scheduler ensures that users' jobs are handled properly, fairly balancing all scheduled jobs against the resources available at any time.
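To give a sense of what the scheduler manages, here is a minimal sketch of a Slurm batch script of the kind users submit once the cluster is running (the job name and resource requests are only illustrative):

```bash
#!/bin/bash
#SBATCH --job-name=hello      # name shown in the queue (illustrative)
#SBATCH --ntasks=1            # request a single task
#SBATCH --time=00:01:00       # wall-time limit for the job

hostname                      # report which compute node ran the job
```

You would submit such a script with `sbatch hello.sh` and watch its progress through the queue with `squeue`.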
In this tutorial you will:
The Slurm Workload Manager (formerly known as Simple Linux Utility for Resource Management, or SLURM) is a free and open-source job scheduler.
1. Make sure the clocks, i.e. the chrony daemons, are synchronized across the cluster (a quick verification sketch follows this list).

2. Generate a **SLURM** and **MUNGE** user on all of your nodes:

   - **If you have your Ansible User Module working**
     - Create the users as shown in Tutorial 2. **Do NOT add them to the sysadmin group**.
   - **If you do NOT have your Ansible User Module working**
     - `useradd slurm`
     - `useradd munge`
   - Ensure that users and groups (UIDs and GIDs) are synchronized across the cluster. Read up on the appropriate [/etc/shadow](https://linuxize.com/post/etc-shadow-file/) and [/etc/passwd](https://www.cyberciti.biz/faq/understanding-etcpasswd-file-format/) files.
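As a quick sanity check for the two steps above — a sketch, assuming `chrony` is installed everywhere and that your compute node is reachable as `computenode01` (substitute your own hostname) — you can verify clock synchronisation and matching UIDs/GIDs from the head node:

```
[...@headnode ~]$ chronyc tracking                    # "Leap status : Normal" means the clock is synchronized
[...@headnode ~]$ ssh computenode01 chronyc tracking
[...@headnode ~]$ id slurm                            # note the uid/gid values...
[...@headnode ~]$ ssh computenode01 id slurm          # ...and confirm they match on every node
```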
First, add the EPEL (Extra Packages for Enterprise Linux) repository:

```
[...@headnode ~]$ sudo dnf install epel-release
```
Then we can install MUNGE, pulling its development packages from the `crb` (CodeReady Builder) repository:
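One possible sequence — a sketch assuming a Rocky Linux 9 base, where the repository id is `crb`; package and repository names may differ on other EL releases — is:

```
[...@headnode ~]$ sudo dnf config-manager --set-enabled crb
[...@headnode ~]$ sudo dnf install munge munge-libs munge-devel
```

Note that `dnf config-manager` is provided by the `dnf-plugins-core` package.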