Submitting job scripts on LUMI

The job scripts on this page rely on the following environment setup:

module load cray-python
source /project/project_465000321/qp2/quantum_package.rc   # source QP environment

When working with SLURM, you should use sbatch to submit your QP and CHAMP job scripts. For example, to submit a QP job script named job_qp.sh, you would use the command:

sbatch job_qp.sh
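
After submission, you can follow and manage the job with standard SLURM commands (shown here only as a quick reference; they are generic SLURM, not specific to this training):

squeue -u $USER    # list your queued and running jobs
scancel <jobid>    # cancel a job, using the job id printed by sbatch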

Example of a QP job script on LUMI

An example of a QP job script would be:

#!/bin/bash
#SBATCH --account=project_465000321   # Project for billing
#SBATCH --reservation=enccs_training  # Reservation for training
#SBATCH --time=00:20:00               # Run time (hh:mm:ss)
#SBATCH --nodes=1                     # Total number of nodes
#SBATCH --ntasks=1                    # Single task: QP parallelizes with OpenMP threads
#SBATCH --cpus-per-task=128           # All cores of the node for OpenMP
#SBATCH --mem=0                       # Allocate all the memory on the node
#SBATCH --partition=standard          # Partition (queue) name

source /project/project_465000321/environment.sh

qp_srun fci COH2.ezfio > COH2.fci.out

Note that QP uses shared-memory parallelism with OpenMP, so it is most efficient when it can use all threads on the node; this is why the script requests a single task with --cpus-per-task=128.
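
If the sourced environment file does not already set the OpenMP thread count, a minimal addition before the qp_srun line would be the following; deriving it from SLURM's allocation is a common convention, not something required by QP:

# Not part of the original script: let OpenMP use every core allocated to the task
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-128}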

Example of a CHAMP job script

For CHAMP, you should request one or more nodes with one MPI task per core (a LUMI standard node has 128 cores); each MPI process uses a single thread.

An example of a CHAMP Variational Monte Carlo job script would be:

#!/bin/bash
#SBATCH --job-name=champ        # Job name
#SBATCH --output=champ.o%j      # Name of stdout output file
#SBATCH --error=champ.e%j       # Name of stderr error file
#SBATCH --partition=standard    # Partition (queue) name
#SBATCH --nodes=1               # Total number of nodes
#SBATCH --ntasks=128            # Total number of mpi tasks
#SBATCH --mem=0                 # Allocate all the memory on the node
#SBATCH --time=0:20:00          # Run time (hh:mm:ss)
#SBATCH --mail-type=all         # Send email at begin and end of job
#SBATCH --account=project_465000321  # Project for billing
#SBATCH --reservation=enccs_training  # Reservation for training

source /project/project_465000321/environment.sh

export PMI_NO_PREINITIALIZE=y

# CHANGE THE FILE NAME
INPUT=vmc_input.inp
OUTPUT=${INPUT%.inp}.out
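# (${INPUT%.inp} strips the trailing .inp, so vmc_input.inp becomes vmc_input.out)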

# Check that the file exists
if [[ ! -f $INPUT ]] ; then
        echo "Error: $INPUT does not exist." > "$OUTPUT"
        exit 1
fi

srun /project/project_465000321/champ/bin/vmc.mov1 -i $INPUT -o $OUTPUT -e error
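
To spread the same VMC run over several nodes, only the resource request changes; a sketch for two LUMI standard nodes (128 cores each), keeping one MPI task per core:

#SBATCH --nodes=2               # Total number of nodes
#SBATCH --ntasks=256            # One MPI task per core (2 x 128)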

An example of a CHAMP Diffusion Monte Carlo job script, which also runs the VMC calculation that generates the starting configurations, would be:

#!/bin/bash
#SBATCH --job-name=champ        # Job name
#SBATCH --output=champ.o%j      # Name of stdout output file
#SBATCH --error=champ.e%j       # Name of stderr error file
#SBATCH --partition=standard    # Partition (queue) name
#SBATCH --nodes=1               # Total number of nodes
#SBATCH --ntasks=128            # Total number of mpi tasks
#SBATCH --mem=0                 # Allocate all the memory on the node
#SBATCH --time=0:20:00          # Run time (hh:mm:ss)
#SBATCH --mail-type=all         # Send email at begin and end of job
#SBATCH --account=project_465000321  # Project for billing
#SBATCH --reservation=enccs_training # Reservation for training


source /project/project_465000321/environment.sh

set -e    # the script will exit if a command below fails

export PMI_NO_PREINITIALIZE=y

# CHANGE THE FILE NAME
VMCINPUT=vmc_input.inp
VMCOUTPUT=${VMCINPUT%.inp}.out

# CHANGE THE FILE NAME
DMCINPUT=dmc_input.inp
DMCOUTPUT=${DMCINPUT%.inp}.out

# Check that the files exist
if [[ ! -f $VMCINPUT ]] ; then
        echo "Error: $VMCINPUT does not exist." > "$VMCOUTPUT"
        exit 1
fi

if [[ ! -f $DMCINPUT ]] ; then
        echo "Error: $DMCINPUT does not exist." > "$DMCOUTPUT"
        exit 1
fi

# Launch MPI code

srun /project/project_465000321/champ/bin/vmc.mov1  -i $VMCINPUT -o $VMCOUTPUT -e error

# Gather the walker configurations written by the VMC run (mc_configs_new*)
# into mc_configs, the starting configurations for the DMC run
cat mc_configs_new* >> mc_configs
rm mc_configs_new*

srun /project/project_465000321/champ/bin/dmc.mov1 -i $DMCINPUT -o $DMCOUTPUT -e error

# Remove scratch files produced by the run; -f keeps set -e from aborting if a pattern matches nothing
rm -f problem* walkalize*
rm -f mc_configs_new*