
where to specify amount of memory? #1

Open
WagnerPatrickResearch opened this issue May 16, 2024 · 4 comments

Comments

@WagnerPatrickResearch
Hey STQ team! First of all, thank you for this great package! If I can get it to work, it will greatly help my work. I recently ran my first PDX Visium experiment and was trying to run your Nextflow pipeline on an HPC (Slurm submission). I adapted the node specification in the submit.sb script and the nextflow.config file. Unfortunately, I ran into some more issues when trying to run the demo. This is the execution error I got (the memory specification is set in submit.sb):

Error executing process > 'TWO:SEQ:LOAD_SAMPLE_INFO (Demo_S1)'

Caused by:
Failed to submit process to grid scheduler for execution

Command executed:

sbatch .command.run

Command exit status:
1

Command output:
sbatch: error: Mem flag not set for partial node allocation.
sbatch: error: Please specify the amount of memory for the job with: "--mem xxG".
sbatch: error: Batch job submission failed: Job size specification needs to be provided

Any Help will be greatly appreciated!
Thank you!
All the best,
Patrick

@WagnerPatrickResearch WagnerPatrickResearch changed the title where to specific amount of memory? where to specify amount of memory? May 17, 2024
@sdomanskyi
Collaborator

Hi Patrick, thanks for reaching out! I'd need more info to understand your issue. Can you point out which lines you modified and what the new values are?
Meanwhile, the "--mem=12G" value in submit.sb doesn't need to be changed; the default value is universal. However, each process's memory allocation might need adjustment.
Best wishes, Sergii
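
A hedged sketch of what such per-process memory settings typically look like in a Nextflow config (the process name and values below are illustrative assumptions, not taken from the STQ pipeline):

```groovy
// nextflow.config -- illustrative per-process resource directives
process {
    // default resources applied to every process
    memory = '8 GB'
    cpus   = 2

    // override for one (hypothetical) memory-hungry step
    withName: 'SOME_HEAVY_STEP' {
        memory = '64 GB'
        cpus   = 8
    }
}
```

With the Slurm executor, Nextflow translates the `memory` directive into the corresponding `--mem` request for each submitted job.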

@WagnerPatrickResearch
Author

Hi Sergii, thank you for your response! So far I only changed the nodes to the actual names of the nodes on our cluster, because that was the first error I got. My submit.sb Slurm script looks like this now:

#SBATCH -p zen3_0512
#SBATCH -q zen3_0512
#SBATCH -t 02:00:00
#SBATCH --mem=2G
#SBATCH --ntasks=1
#SBATCH --ntasks-per-node=30
#SBATCH --signal=2
#SBATCH --no-requeue

For the nextflow.config file I also only edited the node names (lines 194-195):

profiles {
    slurm {
        executor {
            name = "slurm"
            submitRateLimit = '100/1s'
            queueSize = 250
        }
        process.queue = "zen3_0512"
        process.clusterOptions = '-q zen3_0512'
        process.module = "slurm"
    }
    singularity {
        process.module = 'singularity'
Now when I run the demo, I get the error I sent with my previous message.

I'm just realising that for me the default value of "--mem" was 2 and not 12 (as you wrote in your response). Could that be the problem? If not, where and how would I need to change each process's memory allocation?

Sorry if these are really basic questions but it's my first time working in a computational environment like this.
Thank you for your time!
All the best,
Patrick

@sdomanskyi
Collaborator

Hi Patrick,
The 2G vs. 12G will not make any difference in the demo case. This setting is for the "manager" process, in which Nextflow itself runs and decides which jobs/pipeline steps to submit. If a pipeline run contains 100+ samples, it may run out of 2G. The flag -p is for the partition and -q for the QOS on your cluster. You can run the "sinfo" command to see what partitions are available on your HPC, and "sacctmgr show qos" for the QOS. So you probably don't need to specify particular node names.
Best wishes, Sergii

@WagnerPatrickResearch
Author

Hi Sergii,
I have edited the flags (-p, -q) to the right partition and QOS on the cluster.
When I now try to run the demo, I get the memory allocation error again, even though the --mem flag is defined in the submit.sb script.
Command output:
sbatch: error: Mem flag not set for partial node allocation.
sbatch: error: Please specify the amount of memory for the job with: "--mem xxG".
sbatch: error: Batch job submission failed: Job size specification needs to be provided

Is there anywhere else in the scripts where I need to change the memory allocation to something specific?
You mentioned previously that "each process memory allocation might need adjustments"; where would I adjust these, and to what?
Thanks again for your time!
All the best,
Patrick
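
A hedged sketch of two places such a fix often lives in a Nextflow + Slurm setup (syntax is standard Nextflow; the values and the choice of 8G are assumptions, not confirmed in this thread): either append a default --mem to every sbatch call via clusterOptions, or set a process memory directive so Nextflow emits --mem itself.

```groovy
// nextflow.config -- illustrative only
// Option 1: pass --mem through to every sbatch submission
process.clusterOptions = '-q zen3_0512 --mem=8G'

// Option 2: set a default memory directive; the Slurm executor
// converts this into a --mem request per job
process.memory = '8 GB'
```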
