AWS Batch: Add support for aws.batch.jobDefinition.schedulingPriority #6998
Description
New feature
AWS Batch supports specifying a scheduling priority at the Job Definition level. Nextflow should too.
schedulingPriority
The scheduling priority for jobs that are submitted with this job definition. This only affects jobs in job queues with a fair-share policy. Jobs with a higher scheduling priority are scheduled before jobs with a lower scheduling priority.
The minimum supported value is 0 and the maximum supported value is 9999.
Type: Integer
Required: No
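To make the parameter shape concrete, here is a minimal sketch of building a RegisterJobDefinition request body that carries the optional priority and enforces the documented 0-9999 range. The payload layout mirrors the AWS Batch RegisterJobDefinition API; the helper name and arguments are illustrative, not part of Nextflow or the AWS SDK.

```python
from typing import Optional


def job_definition_payload(
    name: str, image: str, scheduling_priority: Optional[int] = None
) -> dict:
    """Build a RegisterJobDefinition request body; schedulingPriority is optional."""
    payload = {
        "jobDefinitionName": name,
        "type": "container",
        "containerProperties": {"image": image},
    }
    if scheduling_priority is not None:
        # Documented bounds: minimum 0, maximum 9999.
        if not 0 <= scheduling_priority <= 9999:
            raise ValueError("schedulingPriority must be between 0 and 9999")
        # Only affects jobs submitted to queues with a fair-share policy.
        payload["schedulingPriority"] = scheduling_priority
    return payload
```

Since the field is optional, omitting the argument leaves it out of the request entirely rather than sending a default.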
Use cases
1. Per-process-type prioritization
Different Nextflow processes have different pipeline criticality. For example, in a bioinformatics pipeline:
- ALIGN (blocks everything downstream) → high priority (e.g., 900)
- QC_REPORT (non-blocking, cosmetic) → low priority (e.g., 100)
Without this support, all jobs get the same default priority regardless of their role.
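As a toy illustration of the ordering effect (not the real AWS Batch fair-share scheduler, which also weighs share usage), sorting queued jobs by descending priority shows why ALIGN would be picked before QC_REPORT:

```python
# Toy model: in a fair-share queue, jobs with a higher schedulingPriority
# are scheduled before jobs with a lower one. Names and values follow the
# example above; this does not model share-usage weighting.
jobs = [
    {"name": "QC_REPORT", "schedulingPriority": 100},
    {"name": "ALIGN", "schedulingPriority": 900},
]
run_order = sorted(jobs, key=lambda j: j["schedulingPriority"], reverse=True)
print([j["name"] for j in run_order])  # → ['ALIGN', 'QC_REPORT']
```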
2. Declarative config, not manual submission-time logic
The priority is baked into the job definition, so you don't have to set it on every submitJob call. In Nextflow terms, you'd express it as a process directive:
```groovy
process ALIGN {
    queue 'fair-share-queue'
    schedulingPriority 900
    ...
}
```
3. Critical-path optimization in shared compute
In multi-tenant or multi-pipeline environments using fair-share queues, high-priority processes (e.g., those gating downstream tasks) can be promoted over lower-priority background work across all pipelines competing for the same queue — not just within your own pipeline.
This feature lets you express scheduling intent declaratively at the process level, which is the natural abstraction boundary in Nextflow, rather than relying on blunt queue-level controls or manual submission-time overrides.
Suggested implementation
None.