Add documentation on how to run and build NAMD on Eiger #86

Merged

Changes from all commits (12 commits):
- a747132: eiger namd (RMeli)
- 0f19d9c: Update docs/software/sciapps/namd.md (RMeli)
- 9738893: Update docs/software/sciapps/namd.md (RMeli)
- 774bbba: Update docs/software/sciapps/namd.md (bcumming)
- 665cc37: Update docs/software/sciapps/namd.md (RMeli)
- 608cbda: Merge branch 'main' into namd-eiger (bcumming)
- d8b746b: Merge branch 'main' into namd-eiger (RMeli)
- a755f99: Ignore code blocks from spell checking (#103) (msimberg)
- 67f3c86: ignore inline code (RMeli)
- be973be: escape (RMeli)
- 79cffbe: Update docs/software/sciapps/namd.md (RMeli)
- bd006db: Update .github/workflows/spelling.yaml (msimberg)

@@ -120,6 +120,7 @@ prgenv
proactively
quickstart
santis
sbatch
screenshot
slurm
smartphone

@@ -0,0 +1,7 @@
# generic ignore spelling block
<!--begin no spell check-->
<!--end no spell check-->

# ignore code blocks
```
```
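
For illustration only (this snippet is not part of the PR), the markers above are meant to wrap text that the spell checker should skip; the target file and the wrapped sentence below are invented examples.

```bash
# Hypothetical illustration of the ignore markers defined above: everything
# between the two HTML comments is excluded from spell checking.
# The target file and the sentence are invented examples, not from the PR.
cat >> docs/software/sciapps/example.md << 'EOF'
<!--begin no spell check-->
Text full of project-specific jargon that the spell checker should skip.
<!--end no spell check-->
EOF
```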

.github/workflows/spelling.yaml

@@ -11,7 +11,11 @@ jobs:
      - uses: actions/checkout@v3
      - name: Check spelling
        id: spelling
        uses: check-spelling/[email protected]
        # The given commit contains preliminary, unreleased, support for ignoring
        # whole blocks (multi-line) from spell checking. See
        # https://github.com/check-spelling/check-spelling/commit/46c981b7c96b3777aff4fd711fc9a8f126121b04
        # for more details.
        uses: check-spelling/check-spelling@46c981b7c96b3777aff4fd711fc9a8f126121b04
        with:
          check_file_names: 1
          post_comment: 0

docs/software/sciapps/namd.md

@@ -22,6 +22,10 @@ The multi-node build works on multiple nodes and is based on [Charm++]'s MPI bac

!!! note "Prefer the single-node build and exploit GPU-resident mode"
    Unless you have good reasons to use the multi-node build, we recommend using the single-node build with the GPU-resident mode.

!!! warning "Eiger"

    The multi-node version is the only version of NAMD available on [Eiger][ref-cluster-eiger] - single-node is not provided.

## Single-node build

The single-node build provides the following views:

@@ -37,7 +41,7 @@ The following sbatch script shows how to run NAMD on a single node with 4 GPUs:
#!/bin/bash
#SBATCH --job-name="namd-example"
#SBATCH --time=00:10:00
#SBATCH --account=<ACCOUNT>
#SBATCH --account=<ACCOUNT> (6)
#SBATCH --nodes=1 (1)
#SBATCH --ntasks-per-node=1 (2)
#SBATCH --cpus-per-task=288

@@ -46,19 +50,17 @@ The following sbatch script shows how to run NAMD on a single node with 4 GPUs:
#SBATCH --view=namd-single-node (5)

srun namd3 +p 29 +pmeps 5 +setcpuaffinity +devices 0,1,2,3 <NAMD_CONFIG_FILE>
srun namd3 +p 29 +pmeps 5 +setcpuaffinity +devices 0,1,2,3 <NAMD_CONFIG_FILE> # (7)!
```

1. You can only use one node with the `single-node` build
2. You can only use one task per node with the `single-node` build
3. Make all GPUs visible to NAMD (by automatically setting `CUDA_VISIBLE_DEVICES=0,1,2,3`)
4. Load the NAMD UENV (UENV name or path to the UENV)
4. Load the NAMD UENV (UENV name or path to the UENV). Change `<NAMD_UENV>` to the name (or path) of the actual NAMD UENV you want to use
5. Load the `namd-single-node` view

* Change `<ACCOUNT>` to your project account
* Change `<NAMD_UENV>` to the name (or path) of the actual NAMD UENV you want to use
* Change `<NAMD_CONFIG_FILE>` to the name (or path) of the NAMD configuration file for your simulation
* Make sure you set `+p`, `+pmeps`, and other NAMD options optimally for your calculation
6. Change `<ACCOUNT>` to your project account
7. Make sure you set `+p`, `+pmeps`, and other NAMD options optimally for your calculation.

Change `<NAMD_CONFIG_FILE>` to the name (or path) of the NAMD configuration file for your simulation
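
As a usage sketch that is not part of the PR, the script above could be saved to a file, the placeholders replaced, and the job submitted; the account `g123`, the uenv `namd/3.0:v1`, and the configuration file `stmv.namd` are invented example values.

```bash
# Hypothetical usage sketch: replace the placeholders in the script above
# (saved here as run-namd.sbatch) and submit it. All substituted values are
# invented examples.
sed -i -e 's|<ACCOUNT>|g123|' \
       -e 's|<NAMD_UENV>|namd/3.0:v1|' \
       -e 's|<NAMD_CONFIG_FILE>|stmv.namd|' run-namd.sbatch
sbatch run-namd.sbatch
```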

??? example "Scaling of STMV benchmark with GPU-resident mode from 1 to 4 GPUs"

@@ -205,52 +207,137 @@ The multi-node build provides the following views:

!!! note "GPU-resident mode"
    The multi-node build based on [Charm++]'s MPI backend can't take advantage of the new GPU-resident mode. Unless you require the multi-node
    build or you can prove it is faster for your use case, we recommend using the single-node build with the GPU-resident mode.

### Running NAMD on Eiger

The following sbatch script shows how to run NAMD on Eiger:

```bash
#!/bin/bash -l
#SBATCH --job-name=namd-test
#SBATCH --time=00:30:00
#SBATCH --nodes=4
#SBATCH --ntasks-per-core=1
#SBATCH --ntasks-per-node=128
#SBATCH --account=<ACCOUNT> (1)
#SBATCH --hint=nomultithread
#SBATCH --hint=exclusive
#SBATCH --constraint=mc
#SBATCH --uenv=namd/3.0:v1 (2)
#SBATCH --view=namd (3)

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export OMP_PROC_BIND=spread
export OMP_PLACES=threads

srun --cpu-bind=cores namd3 +setcpuaffinity ++ppn 4 <NAMD_CONFIG_FILE> # (4)!
```

1. Change `<ACCOUNT>` to your project account
2. Load the NAMD UENV (UENV name or path to the UENV)
3. Load the `namd` view
4. Make sure you set `++ppn` and other NAMD options optimally for your calculation.

Change `<NAMD_CONFIG_FILE>` to the name (or path) of the NAMD configuration file for your simulation.
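
As a brief usage sketch (not part of the PR), once `<ACCOUNT>` and `<NAMD_CONFIG_FILE>` are filled in, the Eiger script can be submitted and monitored with standard Slurm commands; the file name `namd-eiger.sbatch` is an invented example.

```bash
# Hypothetical usage sketch: submit the Eiger batch script shown above
# (saved as namd-eiger.sbatch, an invented file name) and check its status.
sbatch namd-eiger.sbatch
squeue -u $USER
```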

### Building NAMD from source with Charm++'s MPI backend

!!! warning "TCL Version"
    According to the NAMD 3.0 release notes, TCL `8.6` is required. However, the source code for the `3.0` release still contains hard-coded
    flags for TCL `8.5`. The UENV provides `[email protected]`, therefore you need to manually modify NAMD 3.0's `arch/Linux-ARM64.tcl` file as follows:
    change `-ltcl8.5` to `-ltcl8.6` in the definition of the `TCLLIB` variable.
    According to the NAMD 3.0 release notes, TCL `8.6` is required.
    However, the source code for some (beta) releases still contains hard-coded flags for TCL `8.5`.
    The UENV provides `[email protected]`, therefore you need to manually modify NAMD's `arch/Linux-<ARCH>.tcl` file:
    change `-ltcl8.5` to `-ltcl8.6` in the definition of the `TCLLIB` variable, if needed.
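
A minimal sketch of one way to apply that change before configuring the build; it assumes you are in the top-level NAMD source directory and targeting the ARM64 architecture file (adjust the file name for other architectures):

```bash
# One possible way to apply the TCLLIB fix described in the warning above.
# Run from the top-level NAMD source directory; adjust the arch file name
# (e.g. Linux-x86_64.tcl) for non-ARM builds.
sed -i 's/-ltcl8\.5/-ltcl8\.6/' arch/Linux-ARM64.tcl
grep TCLLIB arch/Linux-ARM64.tcl   # verify the substitution
```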

The [NAMD] `uenv` provides all the dependencies required to build [NAMD] from source. You can follow these steps to build [NAMD] from source:

```bash
export DEV_VIEW_NAME="develop"
export PATH_TO_NAMD_SOURCE=<PATH_TO_NAMD_SOURCE>
=== "gh200 build"

# Start uenv and load develop view
uenv start --view=${DEV_VIEW_NAME} <NAMD_UENV>
```bash
export DEV_VIEW_NAME="develop"
export PATH_TO_NAMD_SOURCE=<PATH_TO_NAMD_SOURCE> # (1)!

# Set variable VIEW_PATH to the view
export DEV_VIEW_PATH=/user-environment/env/${DEV_VIEW_NAME}
# Start uenv and load develop view
uenv start --view=${DEV_VIEW_NAME} <NAMD_UENV> # (2)!

cd ${PATH_TO_NAMD_SOURCE}
```
# Set variable VIEW_PATH to the view
export DEV_VIEW_PATH=/user-environment/env/${DEV_VIEW_NAME}

!!! info "Action required"
    Modify the `<PATH_TO_NAMD_SOURCE>/arch/Linux-ARM64.tcl` file now.
    Change `-ltcl8.5` to `-ltcl8.6` in the definition of the `TCLLIB` variable.
cd ${PATH_TO_NAMD_SOURCE}
```

```bash
# Build bundled Charm++
tar -xvf charm-8.0.0.tar && cd charm-8.0.0
env MPICXX=mpicxx ./build charm++ mpi-linux-arm8 smp --with-production -j 32

# Configure NAMD build for GPU
cd ..
./config Linux-ARM64-g++.cuda \
    --charm-arch mpi-linux-arm8-smp --charm-base $PWD/charm-8.0.0 \
    --with-tcl --tcl-prefix ${DEV_VIEW_PATH} \
    --with-fftw --with-fftw3 --fftw-prefix ${DEV_VIEW_PATH} \
    --cuda-gencode arch=compute_90,code=sm_90 --with-single-node-cuda --with-cuda --cuda-prefix ${DEV_VIEW_PATH}
cd Linux-ARM64-g++.cuda && make -j 32

# The namd3 executable (GPU-accelerated) will be built in the Linux-ARM64-g++.cuda directory
```
1. Substitute `<PATH_TO_NAMD_SOURCE>` with the actual path to the NAMD source code
2. Substitute `<NAMD_UENV>` with the actual name (or path) of the NAMD UENV you want to use.

!!! info "Action required"
    Modify the `${PATH_TO_NAMD_SOURCE}/arch/Linux-ARM64.tcl` file now.
    Change `-ltcl8.5` to `-ltcl8.6` in the definition of the `TCLLIB` variable, if needed.

Build [Charm++] bundled with NAMD:

```bash
tar -xvf charm-8.0.0.tar && cd charm-8.0.0
env MPICXX=mpicxx ./build charm++ mpi-linux-arm8 smp --with-production -j 32
```

Finally, you can configure and build NAMD (with GPU acceleration):

```bash
cd ..
./config Linux-ARM64-g++.cuda \
    --charm-arch mpi-linux-arm8-smp --charm-base $PWD/charm-8.0.0 \
    --with-tcl --tcl-prefix ${DEV_VIEW_PATH} \
    --with-fftw --with-fftw3 --fftw-prefix ${DEV_VIEW_PATH} \
    --cuda-gencode arch=compute_90,code=sm_90 --with-single-node-cuda --with-cuda --cuda-prefix ${DEV_VIEW_PATH}
cd Linux-ARM64-g++.cuda && make -j 32
```

The `namd3` executable (GPU-accelerated) will be built in the `Linux-ARM64-g++.cuda` directory.
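
As a quick, hedged sanity check (not part of the PR), you can confirm the binary was produced and links against the uenv-provided libraries before using it in a batch job:

```bash
# Hypothetical sanity checks on the freshly built binary (not from the PR).
ls -lh Linux-ARM64-g++.cuda/namd3                          # the executable exists
ldd Linux-ARM64-g++.cuda/namd3 | grep -E 'tcl|fftw|cuda'   # linked against uenv libraries
```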

=== "zen2 build"

```bash
export DEV_VIEW_NAME="develop"
export PATH_TO_NAMD_SOURCE=<PATH_TO_NAMD_SOURCE> # (1)!

# Start uenv and load develop view
uenv start --view=${DEV_VIEW_NAME} <NAMD_UENV> # (2)!

# Set variable VIEW_PATH to the view
export DEV_VIEW_PATH=/user-environment/env/${DEV_VIEW_NAME}

cd ${PATH_TO_NAMD_SOURCE}
```

1. Substitute `<PATH_TO_NAMD_SOURCE>` with the actual path to the NAMD source code
2. Substitute `<NAMD_UENV>` with the actual name (or path) of the NAMD UENV you want to use.

!!! info "Action required"
    Modify the `${PATH_TO_NAMD_SOURCE}/arch/Linux-x86_64.tcl` file now.
    Change `-ltcl8.5` to `-ltcl8.6` in the definition of the `TCLLIB` variable, if needed.

Build [Charm++] bundled with NAMD:

```bash
tar -xvf charm-8.0.0.tar && cd charm-8.0.0
env MPICXX=mpicxx ./build charm++ mpi-linux-x86_64 smp --with-production -j 32
```

Finally, you can configure and build NAMD:

```bash
cd ..
./config Linux-x86_64-g++ \
    --charm-arch mpi-linux-x86_64-smp --charm-base $PWD/charm-8.0.0 \
    --with-tcl --tcl-prefix ${DEV_VIEW_PATH} \
    --with-fftw --with-fftw3 --fftw-prefix ${DEV_VIEW_PATH}
cd Linux-x86_64-g++ && make -j 32
```

* Change `<PATH_TO_NAMD_SOURCE>` to the path where you have the NAMD source code
* Change `<NAMD_UENV>` to the name (or path) of the actual NAMD UENV you want to use
The `namd3` executable will be built in the `Linux-x86_64-g++` directory.

To run NAMD, make sure you load the same UENV and view you used to build NAMD, and set the following variable:

Review comment:

Minor: perhaps this is safer? I'm not sure if it will otherwise exclude things in between inline code blocks as well. I think most inline code is just a single word without spaces?