5). Troubleshooting
Input errors, such as missing FASTQ files or incorrectly formatted input files, should be flagged at the start of the pipeline and return an error identifying the problem.
Errors in individual processes will be flagged with an error message, for example:
[13/bf74c7] NOTE: Process `NXF_LOMA:LOMA:ASSEMBLY:FLYE (SAMPLE_1)` terminated with an error exit status (130) -- Error is ignored
This indicates that metagenome assembly with Flye has failed. You can determine the exact fault by inspecting the log and error files. They are found within the work/ directory produced during the run, specifically:
work/13/bf74c7*/.command.err
work/13/bf74c7*/.command.log
work/13/bf74c7*/.command.out
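The hexadecimal prefix in square brackets (here 13/bf74c7) names the task's subdirectory under work/, so the logs can be inspected directly. A minimal sketch, assuming you run it from the pipeline launch directory and substitute the hash from your own error line:

```shell
# TASK_HASH is the value shown in square brackets in the error line.
TASK_HASH="13/bf74c7"

# Print each log for the failed task; fall back to a message if the
# directory is missing (e.g. when run outside the launch directory).
for f in work/${TASK_HASH}*/.command.err \
         work/${TASK_HASH}*/.command.log \
         work/${TASK_HASH}*/.command.out; do
  if [ -e "$f" ]; then
    echo "== $f =="
    cat "$f"
  else
    echo "No file matching $f found"
  fi
done
```

The .command.err file is usually the most informative, as it holds the stderr of the tool that actually failed.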
Example error message:
N E X T F L O W ~ version 24.04.4
Launching `main.nf` [nostalgic_bell] DSL2 - revision: 1ad9e34c91
ERROR ~ Unable to acquire lock on session with ID dae0895a-e811-4779-b769-87eb7d2f530e
Common reasons for this error are:
- You are trying to resume the execution of an already running pipeline
- A previous execution was abruptly interrupted, leaving the session open
You can see which process is holding the lock file by using the following command:
- lsof /PATH/TO/LOCK
-- Check '.nextflow.log' file for details
Typically this occurs for one of two reasons:
1). The pipeline is already running elsewhere on the system.
- No fix required; only one instance of the pipeline can be run at once.
2). A previous run was improperly exited (for example, during a disconnection from the system).
- Check for leftover processes with htop, then either manually kill the remaining processes, wait for them to finish, or delete the work/ directory.
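The second case can also be checked and cleared from the shell. A sketch, assuming a Linux host where the stale run was started by the current user (the lock path is whatever the error message reported; the destructive command is left commented out):

```shell
# Look for leftover Nextflow (Java) processes from the interrupted run.
# The [n] trick stops grep from matching its own command line.
ps aux | grep '[n]extflow' || echo "no nextflow processes found"

# If the error message printed a lock path, see which process holds it:
# lsof /PATH/TO/LOCK

# Once no process from the old run remains, resume normally. If the
# session is still locked, removing the stale run state clears it, at
# the cost of discarding all cached results -- a last resort:
# rm -rf .nextflow/ work/
```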
Example error messages:
Command error:
INFO: Converting SIF file to temporary sandbox...
FATAL: while extracting /PATH/TO/*.img: root filesystem extraction failed: failed to copy content in staging file: write /tmp/rootfs-*/*: no space left on device
Java HotSpot(TM) 64-Bit Server VM warning: Insufficient space for shared memory file: /tmp/hsperfdata_user/* Try using the -Djava.io.tmpdir= option to select an alternate temp location.
Error: Could not find or load main class ___.tmp.hsperfdata_user.*
In both cases this is due to insufficient storage space in the /tmp/ directory, caused by unsquashfs uncompressing Singularity images within /tmp/.
- Either don't use conda to install/run Nextflow/Singularity or attempt the fix here (Note: edit the conf/profile.config).
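If /tmp/ cannot be enlarged, a workaround is to point the container runtime and Nextflow at a scratch directory on a larger filesystem before launching the pipeline. A sketch; the location under $HOME is a placeholder, substitute a path with sufficient space:

```shell
# Placeholder scratch location; replace with a directory on a large filesystem.
SCRATCH="$HOME/scratch/tmp"
mkdir -p "$SCRATCH"

# Singularity/Apptainer will unpack images here instead of /tmp/.
export SINGULARITY_TMPDIR="$SCRATCH"
export APPTAINER_TMPDIR="$SCRATCH"   # newer Apptainer name for the same setting

# Nextflow's own temporary files go here too.
export NXF_TEMP="$SCRATCH"
```

These exports only affect the current shell session, so they need to be set (for example in your job submission script) each time the pipeline is launched.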
Certain Nanopore read sets basecalled with Dorado have caused errors with Porechop: reads split on internal adapters are not renamed correctly, leading to duplicate read names. This can be fixed by discarding reads with internal adapters, by including the following parameter in your command:
--PORECHOP_PORECHOP.args="--discard_middle"
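For example, appended to a typical launch command (the --input argument below is a placeholder for illustration, not the pipeline's exact interface):

```shell
# Placeholder invocation; substitute your usual run command and arguments.
nextflow run main.nf \
    --input samplesheet.csv \
    --PORECHOP_PORECHOP.args="--discard_middle"
```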