My expectation was that when I set -n 64 (all available cores) with the setup in my yaml file shown above, each job would occupy only 8 cores to run bwa mem. However, when I checked the log files, both the debug log and the command log showed that the resources were not deployed as I intended. Besides, the pipeline repeatedly threw an error indicating "Segmentation fault (core dumped)", as shown below.
I have no idea how this happened and what I should do to fix it; could you please help me with this problem? Thanks~
Log files (can be found in work/log)
debug-log
[2023-11-15T06:04Z] System YAML configuration: /home/data/bcbio/galaxy/bcbio_system.yaml.
[2023-11-15T06:04Z] Locale set to C.UTF-8.
[2023-11-15T06:04Z] Resource requests: bwa, sambamba, samtools; memory: 2.00, 6.00, 2.00; cores: 8, 32, 16
[2023-11-15T06:04Z] Configuring 1 jobs to run, using 32 cores each with 192.1g of memory reserved for each job
[2023-11-15T06:04Z] Timing: organize samples
[2023-11-15T06:04Z] multiprocessing: organize_samples
I suspect that you have an indentation issue here: you have 4 spaces instead of 2 after resources, so your specifications have not been parsed.
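A minimal sketch of the expected layout, reusing the per-tool numbers from your debug log (treat the values as illustrative and adjust them to your own spec):

    resources:
      bwa:            # 2-space indentation at each level
        cores: 8
        memory: 2G    # memory is per core
      sambamba:
        cores: 32
        memory: 6G
      samtools:
        cores: 16
        memory: 2G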
For a one-node, non-distributed run, bcbio's logic for allocating resources with -n 64 is:
- try to run all tools with 64 cores, if the memory spec allows for that;
- for example, the default spec says 4G/core, so 64 cores would need 64 x 4G = 256G RAM. If your server does not have this amount of RAM, bcbio tries to decrease the number of cores; the next step would be 32 cores x 128G RAM.
After these calculations, bcbio settled on 32 cores each with 192.1g; that reservation apparently comes from the largest per-core memory request in your log (6G for sambamba x 32 cores). A simplified sketch of this downscaling logic follows below.
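To illustrate (a minimal sketch, not bcbio's actual implementation; it assumes a simple halving step, matching the 64 -> 32 example above):

    def fit_cores(requested_cores, mem_per_core_gb, total_ram_gb):
        # Halve the core count until cores x per-core memory fits in RAM.
        cores = requested_cores
        while cores > 1 and cores * mem_per_core_gb > total_ram_gb:
            cores //= 2
        return cores

    # Hypothetical 200G server: 64 cores x 4G = 256G does not fit,
    # so the count is halved to 32 (32 x 4G = 128G fits).
    print(fit_cores(64, 4, 200))  # -> 32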
When bcbio runs a pipe, it accounts for the fact that every command in the pipe consumes RAM, so it decreases the core counts further to fit within the available RAM, which is what happened in this command:
bwa mem -t 32 | bamsormadup threads=24
Still, these values are very high for this server; memory is also consumed by I/O buffers.
You need to try running bcbio with -n 7 or -n 10, and at most -n 20.
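For example (the yaml path is a placeholder for your own project configuration):

    cd work
    bcbio_nextgen.py ../config/project.yaml -n 10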
Large core counts for -n only make sense in distributed bcbio runs, where the cores are requested across many servers.
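A distributed run goes through a scheduler instead; a sketch assuming SLURM and a hypothetical queue name:

    bcbio_nextgen.py ../config/project.yaml -t ipython -s slurm -q general -n 64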
Version info
bcbio_nextgen.py --version
1.2.9
lsb_release -ds
Ubuntu 20.04.5 LTS
To Reproduce
Exact bcbio command you have used:
Your yaml configuration file: