Configuration
Nextflow runs on HPC execution schedulers (Slurm, SGE, and others) and cloud platforms (AWS Batch, Google Cloud, and others).
Nextflow also supports container engines (Docker, Singularity, and others) and dependency managers (Conda and Spack) for software deployment.
To run nf-core pipelines on your system, install your dependency management software (see Installation) and configure Nextflow.
Configuration controls how Nextflow runs (executor, resources, containers). Parameters control what the pipeline does with your data (--input, --outdir).
This section covers configuration. For pipeline-specific parameters, see the pipeline documentation.
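For example, in the following command the single-dash flag is Nextflow configuration and the double-dash flags are pipeline parameters (nf-core/rnaseq and the file names are placeholders; substitute your own pipeline and data):

```bash
# Single-dash flags (-profile) configure how Nextflow runs;
# double-dash flags (--input, --outdir) tell the pipeline what to do with your data.
nextflow run nf-core/rnaseq -profile docker --input samplesheet.csv --outdir results
```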
Configuration options
You can configure pipelines using three approaches:
- Default pipeline configuration profiles
- Shared nf-core/configs configuration profiles
- Custom configuration files
Avoid editing pipeline defaults directly: you can no longer update to newer versions without overwriting your changes, which breaks reproducibility and moves your workflow away from the canonical pipeline.
Choosing your configuration approach
Use default profiles (-profile docker,test) when:
- Running quick tests
- Using standard container engines
- Running on your local machine
Use shared nf-core/configs when:
- Working on a shared HPC cluster
- Your institution already has a profile
- Multiple users run pipelines on the same system
Use custom configuration files when:
- You need specific resource limits
- Running on unique infrastructure
- You are the only user of the pipeline
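The commands below sketch what each approach looks like in practice; the pipeline, profile, and file names are placeholders:

```bash
# 1. Default profiles: quick test with a standard container engine
nextflow run nf-core/rnaseq -profile test,docker --outdir results

# 2. Shared nf-core/configs: load your institution's profile by name
nextflow run nf-core/rnaseq -profile myinstitution --input samplesheet.csv --outdir results

# 3. Custom configuration file: pass your own settings with -c
nextflow run nf-core/rnaseq -profile docker -c my_setup.config --input samplesheet.csv --outdir results
```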
Default configuration profiles
Each nf-core pipeline includes default resource requirements in conf/base.config.
The pipeline loads this base configuration first, then overwrites it with any subsequent configuration layers.
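Resource requirements in base.config are typically assigned to processes through labels. The excerpt below is an illustrative sketch in that style; the actual labels and values vary between pipelines:

```groovy
// Illustrative excerpt in the style of conf/base.config
process {
    // Defaults applied to every process
    cpus   = 1
    memory = 6.GB
    time   = 4.h

    // Overrides for processes tagged with a label
    withLabel: process_medium {
        cpus   = 6
        memory = 36.GB
        time   = 8.h
    }
}
```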
Enable configuration profiles using the -profile command line flag.
You can specify multiple profiles in a comma-separated list (e.g., -profile test,docker).
Order matters. Profiles load in sequence. Later profiles overwrite earlier ones.
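To see why order matters, consider two hypothetical profiles that set the same option:

```groovy
// Hypothetical profiles that both set process.memory
profiles {
    small {
        process.memory = 8.GB
    }
    big {
        process.memory = 64.GB
    }
}
// -profile small,big resolves to 64.GB; -profile big,small resolves to 8.GB
```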
nf-core provides these basic profiles for container engines:
- docker: Uses Docker and pulls software from quay.io
- singularity: Uses Singularity and pulls software from quay.io
- apptainer: Uses Apptainer and pulls software from quay.io
- podman: Uses Podman
- shifter: Uses Shifter
- charliecloud: Uses Charliecloud
- conda: Uses Conda and pulls most software from Bioconda
Use Conda only as a last resort (that is, when you cannot run the pipeline with Docker or Singularity).
Without a specified profile, the pipeline runs locally and expects all software to be installed and available on the PATH.
This approach is not recommended.
Each pipeline includes test and test_full profiles.
These run the workflow with minimal or full-size public datasets for automated CI tests.
You can use them to test the pipeline on your infrastructure before running your own data.
Shared nf-core/configs
If you work on a shared system (for example, an HPC cluster or institutional server), use a configuration profile from nf-core/configs. All nf-core pipelines load these shared profiles at run time.
Check if your system has a profile at https://github.com/nf-core/configs. If not, follow the repository instructions or the tutorial to add your cluster.
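A common way to verify a shared profile works on your system is to combine it with the test profile described below (myinstitution is a placeholder for your profile's name in nf-core/configs):

```bash
# Run the bundled small test dataset using your institutional profile
nextflow run nf-core/rnaseq -profile test,myinstitution --outdir test_results
```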
Custom configuration files
If you run the pipeline alone, create a local configuration file. Nextflow searches for configuration files in three locations:
- User's home directory: ~/.nextflow/config
- Analysis working directory: nextflow.config
- Custom path on the command line: -c path/to/config (you can specify multiple files)
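As a sketch of what such a file might contain (the executor, queue, and limits below are assumptions; adapt them to your system):

```groovy
// my_setup.config (hypothetical): run on a Slurm cluster with site-specific limits
process {
    executor = 'slurm'
    queue    = 'short'

    // Cap resource requests to what the cluster offers (Nextflow 24.04 or later)
    resourceLimits = [ cpus: 32, memory: 256.GB, time: 48.h ]
}

executor {
    queueSize = 50   // limit the number of jobs queued at once
}
```

Pass the file at run time with -c my_setup.config.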
Nextflow loads configuration settings sequentially, with later values overwriting earlier ones. The loading order is:
- Pipeline defaults
- User's home directory
- Working directory
- Each -c file in the order you specify
- Command line parameters (--<parameter>)
Parameters (params) set in custom .config files will not override pipeline defaults in nextflow.config.
Use -params-file with YAML or JSON format instead.
Generate a parameters file using the Launch button on the nf-co.re website.
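A minimal parameters file might look like this (input and outdir are common nf-core parameters; check your pipeline's documentation for the full set):

```yaml
# params.yaml (hypothetical)
input: 'samplesheet.csv'
outdir: 'results'
```

Run it with: nextflow run nf-core/rnaseq -profile docker -params-file params.yaml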
Additional resources
For more information about configuration syntax and options, see the Nextflow configuration documentation (https://www.nextflow.io/docs/latest/config.html) and the nf-core/configs repository (https://github.com/nf-core/configs).