From a747132ea6eac96e4a989eb554b0abbb3ca01f81 Mon Sep 17 00:00:00 2001
From: Rocco Meli
Date: Fri, 11 Apr 2025 16:07:16 +0200
Subject: [PATCH 1/5] eiger namd

---
 docs/software/sciapps/namd.md | 169 +++++++++++++++++++++++++---------
 1 file changed, 128 insertions(+), 41 deletions(-)

diff --git a/docs/software/sciapps/namd.md b/docs/software/sciapps/namd.md
index 6ef718e7..6e182941 100644
--- a/docs/software/sciapps/namd.md
+++ b/docs/software/sciapps/namd.md
@@ -22,6 +22,10 @@ The multi-node build works on multiple nodes and is based on [Charm++]'s MPI bac
 !!! note "Prefer the single-node build and exploit GPU-resident mode"
     Unless you have good reasons to use the multi-node build, we recommend using the single-node build with the GPU-resident mode.

+!!! warning "Eiger"
+
+    The single-node build is not available on [Eiger][ref-eiger]. You need to use the multi-node build on [Eiger].
+
 ## Single-node build

 The single-node build provides the following views:
@@ -37,7 +41,7 @@ The following sbatch script shows how to run NAMD on a single node with 4 GPUs:
 #!/bin/bash
 #SBATCH --job-name="namd-example"
 #SBATCH --time=00:10:00
-#SBATCH --account=<ACCOUNT>
+#SBATCH --account=<ACCOUNT> (6)
 #SBATCH --nodes=1 (1)
 #SBATCH --ntasks-per-node=1 (2)
 #SBATCH --cpus-per-task=288
@@ -46,19 +50,17 @@ The following sbatch script shows how to run NAMD on a single node with 4 GPUs:
 #SBATCH --view=namd-single-node (5)

-srun namd3 +p 29 +pmeps 5 +setcpuaffinity +devices 0,1,2,3 <NAMD_CONFIG_FILE>
+srun namd3 +p 29 +pmeps 5 +setcpuaffinity +devices 0,1,2,3 <NAMD_CONFIG_FILE> # (7)!
 ```

 1. You can only use one node with the `single-node` build
 2. You can only use one task per node with the `single-node` build
 3. Make all GPUs visible to NAMD (by automatically setting `CUDA_VISIBLE_DEVICES=0,1,2,3`)
-4. Load the NAMD UENV (UENV name or path to the UENV)
+4. Load the NAMD UENV (UENV name or path to the UENV). Change `<NAMD_UENV>` to the name (or path) of the actual NAMD UENV you want to use
 5. Load the `namd-single-node` view
-
-* Change `<ACCOUNT>` to your project account
-* Change `<NAMD_UENV>` to the name (or path) of the actual NAMD UENV you want to use
-* Change `<NAMD_CONFIG_FILE>` to the name (or path) of the NAMD configuration file for your simulation
-* Make sure you set `+p`, `+pmeps`, and other NAMD options optimally for your calculation
+6. Change `<ACCOUNT>` to your project account
+7. Make sure you set `+p`, `+pmeps`, and other NAMD options optimally for your calculation.
+   Change `<NAMD_CONFIG_FILE>` to the name (or path) of the NAMD configuration file for your simulation

 ??? example "Scaling of STMV benchmark with GPU-resident mode from 1 to 4 GPUs"

@@ -205,52 +207,137 @@ The multi-node build provides the following views:
 !!! note "GPU-resident mode"
     The multi-node build based on [Charm++]'s MPI backend can't take advantage of the new GPU-resident mode. Unless you require the multi-node build or you can prove it is faster for your use case, we recommend using the single-node build with the GPU-resident mode.
+
+
+### Running NAMD on Eiger
+
+The following sbatch script shows how to run NAMD on Eiger:
+
+```bash
+#!/bin/bash -l
+#SBATCH --job-name=namd-test
+#SBATCH --time=00:30:00
+#SBATCH --nodes=4
+#SBATCH --ntasks-per-core=1
+#SBATCH --ntasks-per-node=128
+#SBATCH --account=<ACCOUNT> (1)
+#SBATCH --hint=nomultithread
+#SBATCH --hint=exclusive
+#SBATCH --constraint=mc
+#SBATCH --uenv=namd/3.0:v1 (2)
+#SBATCH --view=namd (3)
+
+export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
+export OMP_PROC_BIND=spread
+export OMP_PLACES=threads
+
+srun --cpu-bind=cores namd3 +setcpuaffinity ++ppn 4 <NAMD_CONFIG_FILE> # (4)!
+```
+
+1. Change `<ACCOUNT>` to your project account
+2. Load the NAMD UENV (UENV name or path to the UENV). Change `namd/3.0:v1` to the name (or path) of the actual NAMD UENV you want to use
+3. Load the `namd` view
+4. Make sure you set `++ppn` and other NAMD options optimally for your calculation.
+   Change `<NAMD_CONFIG_FILE>` to the name (or path) of the NAMD configuration file for your simulation
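+
+For illustration (the script file name here is hypothetical), save the script above, fill in the placeholders, and submit it with Slurm:
+
+```bash
+sbatch namd-eiger.sbatch # submit the job script shown above
+squeue --me              # monitor your queued and running jobs
+```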

 ### Building NAMD from source with Charm++'s MPI backend

 !!! warning "TCL Version"
-    According to the NAMD 3.0 release notes, TCL `8.6` is required. However, the source code for the `3.0` release still contains hard-coded
-    flags for TCL `8.5`. The UENV provides `tcl@8.6`, therefore you need to manually modify NAMD 3.0's `arch/Linux-ARM64.tcl` file as follows:
-    change `-ltcl8.5` to `-ltcl8.6` in the definition of the `TCLLIB` variable.
+    According to the NAMD 3.0 release notes, TCL `8.6` is required.
+    However, the source code for some (beta) releases still contains hard-coded flags for TCL `8.5`.
+    The UENV provides `tcl@8.6`, therefore you need to manually modify NAMD's `arch/Linux-<ARCH>.tcl` file:
+    change `-ltcl8.5` to `-ltcl8.6` in the definition of the `TCLLIB` variable, if needed.

 The [NAMD] `uenv` provides all the dependencies required to build [NAMD] from source. You can follow these steps:

-```bash
-export DEV_VIEW_NAME="develop"
-export PATH_TO_NAMD_SOURCE=<PATH_TO_NAMD_SOURCE>
-
-# Start uenv and load develop view
-uenv start <NAMD_UENV> --view=${DEV_VIEW_NAME}
-
-# Set variable VIEW_PATH to the view
-export DEV_VIEW_PATH=/user-environment/env/${DEV_VIEW_NAME}
-
-cd ${PATH_TO_NAMD_SOURCE}
-```
-
-!!! info "Action required"
-    Modify the `<PATH_TO_NAMD_SOURCE>/arch/Linux-ARM64.tcl` file now.
-    Change `-ltcl8.5` with `-ltcl8.6` in the definition of the `TCLLIB` variable.
-
-```bash
-# Build bundled Charm++
-tar -xvf charm-8.0.0.tar && cd charm-8.0.0
-env MPICXX=mpicxx ./build charm++ mpi-linux-arm8 smp --with-production -j 32
-
-# Configure NAMD build for GPU
-cd ..
-./config Linux-ARM64-g++.cuda \
-  --charm-arch mpi-linux-arm8-smp --charm-base $PWD/charm-8.0.0 \
-  --with-tcl --tcl-prefix ${DEV_VIEW_PATH} \
-  --with-fftw --with-fftw3 --fftw-prefix ${DEV_VIEW_PATH} \
-  --cuda-gencode arch=compute_90,code=sm_90 --with-single-node-cuda --with-cuda --cuda-prefix ${DEV_VIEW_PATH}
-cd Linux-ARM64-g++.cuda && make -j 32
-
-# The namd3 executable (GPU-accelerated) will be built in the Linux-ARM64-g++.cuda directory
-```
-
-* Change `<PATH_TO_NAMD_SOURCE>` to the path where you have the NAMD source code
-* Change `<NAMD_UENV>` to the name (or path) of the actual NAMD UENV you want to use
+=== "gh200 build"
+
+    ```bash
+    export DEV_VIEW_NAME="develop"
+    export PATH_TO_NAMD_SOURCE=<PATH_TO_NAMD_SOURCE> # (1)!
+
+    # Start uenv and load develop view
+    uenv start <NAMD_UENV> --view=${DEV_VIEW_NAME} # (2)!
+
+    # Set variable VIEW_PATH to the view
+    export DEV_VIEW_PATH=/user-environment/env/${DEV_VIEW_NAME}
+
+    cd ${PATH_TO_NAMD_SOURCE}
+    ```
+
+    1. Substitute `<PATH_TO_NAMD_SOURCE>` with the actual path to the NAMD source code
+    2. Substitute `<NAMD_UENV>` with the actual name (or path) of the NAMD UENV you want to use.
+
+
+    !!! info "Action required"
+        Modify the `${PATH_TO_NAMD_SOURCE}/arch/Linux-ARM64.tcl` file now.
+        Change `-ltcl8.5` with `-ltcl8.6` in the definition of the `TCLLIB` variable, if needed.
+
+
+    Build [Charm++] boundled with NAMD:
+
+    ```bash
+    tar -xvf charm-8.0.0.tar && cd charm-8.0.0
+    env MPICXX=mpicxx ./build charm++ mpi-linux-arm8 smp --with-production -j 32
+    ```
+
+    Finally, you can configure and build NAMD (with GPU acceleration):
+
+    ```bash
+    cd ..
+    ./config Linux-ARM64-g++.cuda \
+    --charm-arch mpi-linux-arm8-smp --charm-base $PWD/charm-8.0.0 \
+    --with-tcl --tcl-prefix ${DEV_VIEW_PATH} \
+    --with-fftw --with-fftw3 --fftw-prefix ${DEV_VIEW_PATH} \
+    --cuda-gencode arch=compute_90,code=sm_90 --with-single-node-cuda --with-cuda --cuda-prefix ${DEV_VIEW_PATH}
+    cd Linux-ARM64-g++.cuda && make -j 32
+    ```
+
+    The `namd3` executable (GPU-accelerated) will be built in the `Linux-ARM64-g++.cuda` directory.
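+
+    As an optional sanity check (illustrative only, not part of the official build steps), you can verify that the freshly built binary links against the UENV's TCL and FFTW libraries:
+
+    ```bash
+    ldd Linux-ARM64-g++.cuda/namd3 | grep -E 'tcl|fftw'
+    ```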
+
+=== "zen2 build"
+
+    ```bash
+    export DEV_VIEW_NAME="develop"
+    export PATH_TO_NAMD_SOURCE=<PATH_TO_NAMD_SOURCE> # (1)!
+
+    # Start uenv and load develop view
+    uenv start <NAMD_UENV> --view=${DEV_VIEW_NAME} # (2)!
+
+    # Set variable VIEW_PATH to the view
+    export DEV_VIEW_PATH=/user-environment/env/${DEV_VIEW_NAME}
+
+    cd ${PATH_TO_NAMD_SOURCE}
+    ```
+
+    1. Substitute `<PATH_TO_NAMD_SOURCE>` with the actual path to the NAMD source code
+    2. Substitute `<NAMD_UENV>` with the actual name (or path) of the NAMD UENV you want to use.
+
+
+    !!! info "Action required"
+        Modify the `${PATH_TO_NAMD_SOURCE}/arch/Linux-x86_64.tcl` file now.
+        Change `-ltcl8.5` with `-ltcl8.6` in the definition of the `TCLLIB` variable, if needed.
+
+    Build [Charm++] boundled with NAMD:
+
+    ```bash
+    tar -xvf charm-8.0.0.tar && cd charm-8.0.0
+    env MPICXX=mpicxx ./build charm++ mpi-linux-x86_64 smp --with-production -j 32
+    ```
+
+    Finally, you can configure and build NAMD:
+
+    ```bash
+    cd ..
+    ./config Linux-x86_64-g++ \
+    --charm-arch mpi-linux-x86_64-smp --charm-base $PWD/charm-8.0.0 \
+    --with-tcl --tcl-prefix ${DEV_VIEW_PATH} \
+    --with-fftw --with-fftw3 --fftw-prefix ${DEV_VIEW_PATH}
+    cd Linux-x86_64-g++ && make -j 32
+    ```
+
+    The `namd3` executable will be built in the `Linux-x86_64-g++` directory.
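+
+    The same optional check (again illustrative only) applies to this build:
+
+    ```bash
+    ldd Linux-x86_64-g++/namd3 | grep -E 'tcl|fftw'
+    ```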

 To run NAMD, make sure you load the same UENV and view you used to build NAMD, and set the following variable:

From 0f19d9cb3cd98b1b9274d4beb4b16908f7347b4f Mon Sep 17 00:00:00 2001
From: Rocco Meli
Date: Mon, 14 Apr 2025 14:37:37 +0200
Subject: [PATCH 2/5] Update docs/software/sciapps/namd.md

Co-authored-by: Mikael Simberg
---
 docs/software/sciapps/namd.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/software/sciapps/namd.md b/docs/software/sciapps/namd.md
index 6e182941..8713fee5 100644
--- a/docs/software/sciapps/namd.md
+++ b/docs/software/sciapps/namd.md
@@ -275,7 +275,7 @@ The [NAMD] `uenv` provides all the dependencies required to build [NAMD] from so
         Change `-ltcl8.5` with `-ltcl8.6` in the definition of the `TCLLIB` variable, if needed.


-    Build [Charm++] boundled with NAMD:
+    Build [Charm++] bundled with NAMD:

     ```bash
     tar -xvf charm-8.0.0.tar && cd charm-8.0.0

From 97388934a94f0e7020872fa16b01dee439d7fdfb Mon Sep 17 00:00:00 2001
From: Rocco Meli
Date: Mon, 14 Apr 2025 14:38:17 +0200
Subject: [PATCH 3/5] Update docs/software/sciapps/namd.md

---
 docs/software/sciapps/namd.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/software/sciapps/namd.md b/docs/software/sciapps/namd.md
index 8713fee5..aed93552 100644
--- a/docs/software/sciapps/namd.md
+++ b/docs/software/sciapps/namd.md
@@ -319,7 +319,7 @@ The [NAMD] `uenv` provides all the dependencies required to build [NAMD] from so
     Modify the `${PATH_TO_NAMD_SOURCE}/arch/Linux-x86_64.tcl` file now.
     Change `-ltcl8.5` with `-ltcl8.6` in the definition of the `TCLLIB` variable, if needed.

-    Build [Charm++] boundled with NAMD:
+    Build [Charm++] bundled with NAMD:

     ```bash
     tar -xvf charm-8.0.0.tar && cd charm-8.0.0

From 774bbba7f5acb649420d0ca628511f09a55763cf Mon Sep 17 00:00:00 2001
From: Ben Cumming
Date: Wed, 16 Apr 2025 12:13:58 +0200
Subject: [PATCH 4/5] Update docs/software/sciapps/namd.md

---
 docs/software/sciapps/namd.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/software/sciapps/namd.md b/docs/software/sciapps/namd.md
index aed93552..226bfba9 100644
--- a/docs/software/sciapps/namd.md
+++ b/docs/software/sciapps/namd.md
@@ -24,7 +24,7 @@ The multi-node build works on multiple nodes and is based on [Charm++]'s MPI bac
 !!! warning "Eiger"

-    The single-node build is not available on [Eiger][ref-eiger]. You need to use the multi-node build on [Eiger].
+    The single-node build is not available on [Eiger][ref-cluster-eiger]. You need to use the multi-node build on [Eiger].

 ## Single-node build

From 665cc375b651b0a4bc09934cb65e1f821cd5b971 Mon Sep 17 00:00:00 2001
From: Rocco Meli
Date: Wed, 16 Apr 2025 13:11:18 +0200
Subject: [PATCH 5/5] Update docs/software/sciapps/namd.md

Co-authored-by: Ben Cumming
---
 docs/software/sciapps/namd.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/software/sciapps/namd.md b/docs/software/sciapps/namd.md
index 226bfba9..3bf56edf 100644
--- a/docs/software/sciapps/namd.md
+++ b/docs/software/sciapps/namd.md
@@ -24,7 +24,7 @@ The multi-node build works on multiple nodes and is based on [Charm++]'s MPI bac
 !!! warning "Eiger"

-    The single-node build is not available on [Eiger][ref-cluster-eiger]. You need to use the multi-node build on [Eiger].
+    The multi-node version is the only version of NAMD available on [Eiger][ref-cluster-eiger] - single-node is not provided.

 ## Single-node build