pysbatch 0.1.6


A Slurm command wrapper. Submit (sbatch) Slurm cluster jobs inside Python and avoid shell scripts for complicated pipeline jobs. Of the sbatch options, it currently supports only job name, memory size (in GB), time limit (in days), dependency, and output file, but you can use the add_option parameter to pass any others.

Current release info

Release and download badges (name, downloads, version, platforms) are maintained in the conda-forge feedstock: https://github.com/conda-forge/pysbatch-feedstock/blob/master/README.md

install on linux/unix

# newer versions take time to appear on PyPI, so if you hit a problem try installing from GitHub
git clone https://github.com/luptior/pysbatch.git
cd pysbatch
pip install .

# published on PyPI, supports Python >= 3.5 only
pip install pysbatch

# also available from the conda-forge channel or my personal channel
conda install -c conda-forge pysbatch
conda install -c luptior pysbatch
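
After installing, a quick import check can confirm the package is on your path (this assumes the public functions are importable at the top level, as the examples below suggest):

python -c "from pysbatch import sbatch, run_cmd, limit_jobs; print('pysbatch ok')"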

customized settings

This is the recommended way to use pysbatch: a batch_setting object can be reused across submissions, which avoids repeating the same options.

from pysbatch import *
x = batch_setting_new() # with default settings
x2 = batch_setting_new("--cpus-per-task=2 --job-name=lalaland") # with customized settings
x.edit_default("--cpus-per-task=2 --job-name=lalaland") # despite the name, this replaces the defaults rather than editing them
x.add_options("--begin=16:00") # append an extra sbatch option

# dependencies are managed the same way
x.add_dep(27561) # wait for job 27561
x.add_dep(27562)
x.reset_dep() # clear all dependencies

# the settings object can be reused
x.sbatch("python hello.py")
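
For instance, here is a small two-step pipeline sketch. It assumes x.sbatch() returns sbatch's usual "Submitted batch job N" output string, the same way the module-level sbatch() example below does; the script names are made up:

from pysbatch import *

settings = batch_setting_new("--cpus-per-task=4 --job-name=pipeline")

# submit the first step and pull the job id out of the
# "Submitted batch job NNNN" string (assumption: x.sbatch()
# returns sbatch's stdout, like the top-level sbatch() below)
out = settings.sbatch("python preprocess.py")
jobid = out.strip("\n").split(" ")[-1]

settings.add_dep(int(jobid)) # second step waits on the first
settings.sbatch("python analyze.py")
settings.reset_dep() # clean up before unrelated submissions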

running the sbatch() function in python

from pysbatch import *
sbatch(wrap="python hello.py") # simplest

jobid = sbatch(wrap="python hello.py").strip("\n").split(" ")[-1] # capture the job id for a dependency
sbatch(job_name="py_job", mem=16, dep="--dependency=afterok:{}".format(jobid), time="3-0", log="submit.out", wrap="python hello.py") # more options; time uses Slurm's day-hour format as a string

sbatch(job_name="py_job", add_option="--cpus-per-task=1 --nodes=3", wrap="python hello.py") # add more options
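
The same keyword arguments compose naturally in a loop. For example, a hypothetical parameter sweep using only the options shown above (script name and configs are made up):

from pysbatch import *

# one job per config; train.py and the config names are placeholders
for i, cfg in enumerate(["small", "medium", "large"]):
    sbatch(job_name="sweep_{}".format(i),
           mem=8,
           add_option="--cpus-per-task=4",
           wrap="python train.py --config {}".format(cfg))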

limit the total number of jobs running/queued

Useful if your Slurm cluster enforces a queue quota and you need to submit a large batch of jobs.

# example
for job in job_batch:
  sbatch(wrap=job) # job is a command string, e.g. "python hello.py"
  limit_jobs(limit=10000) # default is 200000
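
limit_jobs() presumably throttles by polling the queue. A rough, hypothetical equivalent built on run_cmd() (described below) and squeue might look like the following; this is a sketch under that assumption, not the library's actual implementation:

import getpass
import time
from pysbatch import run_cmd

def wait_below(limit):
    # hypothetical helper: block while this user's running+pending
    # job count is at or above `limit`, polling squeue every 60 s
    # (assumes run_cmd() returns the command's stdout as a string)
    user = getpass.getuser()
    while True:
        out = run_cmd(["squeue", "-h", "-u", user])
        njobs = len([line for line in out.split("\n") if line.strip()])
        if njobs < limit:
            return
        time.sleep(60)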

run_cmd()

# a simplified subprocess.run() wrapper for running a linux command from python

# in linux
$ ls ..
unrar
var
zlib-1.2.11
# in python
>>> print(run_cmd(['ls', '..']))
unrar
var
zlib-1.2.11
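
Under the hood this is roughly the following (a sketch, assuming run_cmd() captures stdout and returns it as text; not the library's exact code):

import subprocess

def run_cmd_sketch(cmd):
    # run the command, capture stdout, and return it decoded as text
    # (subprocess.run is available on Python >= 3.5, matching the
    # package's stated minimum version)
    result = subprocess.run(cmd, stdout=subprocess.PIPE)
    return result.stdout.decode("utf-8")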

customized settings (the 0.1.2-style API, which still works)

(a little messy; the newer interface above supersedes it)

from pysbatch import *

# initialize and edit a batch_setting object
x = batch_setting() # start with default settings
x2 = batch_setting(mem="16") # change settings at initialization

# default options contained in this package:
# --ntasks, --cpus-per-task, -N, --job-name, --mem, --time, --out
x.edit_cpus_per_task(8)
x.edit_mem("16") # one at a time
x.edit_default("--cpus-per-task=2 --job-name=lalaland") # all together

# edit dependencies
x.add_dep(27561)
x.add_dep(27562)
x.reset_dep()

# add additional options
x.add_options("--begin=16:00")

x = batch_setting(empty_set=True) # start from an empty option set
x.add_options("--cpus-per-task=2 --job-name=lalaland")

# the settings object can be reused
x.sbatch("python hello.py")

Authors

License

This project is licensed under the MIT License - see the LICENSE.md file for details
