Commit 73caf4d

Update docs to version 22.12.0-edge
Signed-off-by: Paolo Di Tommaso <[email protected]>
1 parent 6e044e7 commit 73caf4d


66 files changed: +3277 additions, -1648 deletions

assets/docs/edge/.buildinfo

Lines changed: 1 addition & 1 deletion

@@ -1,4 +1,4 @@
 # Sphinx build info version 1
 # This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: d8a475bdae6146a78a7e69d8a4a55a7b
+config: 44c7db84bcca4445fa0fd1670e4e5d03
 tags: 645f666f9bcd5a90fca523b33c5a78b7

assets/docs/edge/_sources/aws.rst.txt

Lines changed: 17 additions & 4 deletions

@@ -152,6 +152,12 @@ AWS CLI
 Nextflow requires to access the `AWS command line tool <https://aws.amazon.com/cli/>`_ (``aws``) from the container in
 which the job runs in order to stage the required input files and to copy back the resulting output files in the S3 storage.
 
+.. tip::
+    When using :ref:`wave-page` and :ref:`fusion-page` there is no need to configure the AWS command line tool in
+    the task containers or the host Ec2 instances for the deployment of Nextflow pipeline with AWS Batch.
+    You can ignore this requirement when using the Fusion file system feature. See the :ref:`fusion-page` documentation
+    for further details.
+
 The ``aws`` tool can be made available in the container in two ways:
 
 1. installed in the Docker image(s) used during the pipeline execution,

@@ -299,6 +305,12 @@ AWS CLI installation
 .. warning:: The `AWS CLI tool <https://aws.amazon.com/cli>`_ must to be installed in your custom AMI
     by using a self-contained package manager such as `Conda <https://conda.io>`_.
 
+.. tip::
+    When using :ref:`wave-page` and :ref:`fusion-page` there is no need to configure the AWS command line tool in
+    the task containers or the host Ec2 instances for the deployment of Nextflow pipeline with AWS Batch.
+    You can ignore this requirement when using the Fusion file system feature. See the :ref:`fusion-page` documentation
+    for further details.
+
 The reason is that when the AWS CLI tool executes using Conda it will use the version of python supplied by Conda.
 If you don't use Conda and install the AWS CLI using something like `pip <https://pypi.org/project/pip/>`_ the ``aws``
 command will attempt to run using the version of python found in the running container which won't be able to find

@@ -335,8 +347,9 @@ Replace the path above with the one matching the location where ``aws`` tool is
 .. note:: Using a version of Nextflow prior 19.07.x the config setting `executor.awscli` should be used
     instead of `aws.batch.cliPath`.
 
+
 Docker installation
----------------------------------------
+-------------------
 Docker is required by Nextflow to execute tasks on AWS Batch. `Amazon ECS-Optimized Amazon Linux 2 AMI` has Docker installed,
 however if you create your AMI starting from a different AMI that does not have Docker installed, you need to do it manually.
 
@@ -392,7 +405,7 @@ of a specific job e.g. to define custom mount paths or other Batch Job special s
 
 To do that first create a *Job Definition* in the AWS Console (or with other means). Note the name of the *Job Definition*
 you created. You can then associate a process execution with this *Job definition* by using the :ref:`process-container`
-directive and specifing, in place of the container image name, the Job definition name prefixed by the
+directive and specifying, in place of the container image name, the Job definition name prefixed by the
 ``job-definition://`` string, as shown below::
 
     process.container = 'job-definition://your-job-definition-name'

@@ -411,13 +424,13 @@ The pipeline execution must specifies a AWS Storage bucket where jobs intermedia
 
 .. warning::
     The bucket path should include at least a top level directory name e.g. use ``s3://my-bucket/work``
-    not just ``s3://my-bucket``.
+    not just ``s3://my-bucket``.
 
 Hybrid workloads
 ----------------
 
 Nextflow allows the use of multiple executors in the same workflow application. This feature enables the deployment
-of hybrid workloads in which some jobs are execute in the local computer or local computing cluster and
+of hybrid workloads in which some jobs are executed in the local computer or local computing cluster and
 some jobs are offloaded to AWS Batch service.
 
 To enable this feature use one or more :ref:`config-process-selectors` in your Nextflow configuration file to apply
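
The hybrid setup described in the last hunk above can be sketched with a process selector in ``nextflow.config``. This is a hypothetical illustration (the ``cloud`` label, queue name, container image and region are made up, not part of the commit):

```groovy
// Hypothetical hybrid-workload configuration: processes carrying the
// (assumed) label 'cloud' are offloaded to AWS Batch, everything else
// runs with the local executor.
process {
    executor = 'local'

    withLabel: cloud {
        executor  = 'awsbatch'
        queue     = 'my-batch-queue'            // assumed AWS Batch queue name
        container = 'quay.io/my-org/tool:1.0'   // assumed container image
    }
}

aws.region = 'eu-west-1'                        // assumed AWS region
```

A process opts into the Batch branch simply by declaring ``label 'cloud'`` in its definition; unlabelled processes keep running locally.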

assets/docs/edge/_sources/azure.rst.txt

Lines changed: 42 additions & 3 deletions

@@ -308,13 +308,13 @@ Together, these settings determine the Operating System and version installed on
 By default, Nextflow creates CentOS 8-based pool nodes, but this behavior can be customised in the pool configuration.
 Below the configurations for image reference/SKU combinations to select two popular systems.
 
-* Ubuntu 20.04::
+* Ubuntu 20.04 (default)::
 
     azure.batch.pools.<name>.sku = "batch.node.ubuntu 20.04"
     azure.batch.pools.<name>.offer = "ubuntu-server-container"
     azure.batch.pools.<name>.publisher = "microsoft-azure-batch"
 
-* CentOS 8 (default)::
+* CentOS 8::
 
     azure.batch.pools.<name>.sku = "batch.node.centos 8"
     azure.batch.pools.<name>.offer = "centos-container"

@@ -345,6 +345,42 @@ Public images from other registries are still pulled (if requested by a Task) wh
 specified via the :ref:`container <process-container>` directive using the format: ``[server]/[your-organization]/[your-image]:[tag]``.
 Read more about image fully qualified image names in the `Docker documentation <https://docs.docker.com/engine/reference/commandline/pull/#pull-from-a-different-registry>`_.
 
+Active Directory Authentication
+===============================
+
+As of version ``22.11.0-edge``, `Service Principal <https://learn.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal>`_ credentials can optionally be used instead of Shared Keys for Azure Batch and Storage accounts.
+
+The Service Principal should have the at least the following role assignments:
+
+1. Contributor
+
+2. Storage Blob Data Reader
+
+3. Storage Blob Data Contributor
+
+.. note::
+    To assign the necessary roles to the Service Principal refer to the `official Azure documentation <https://learn.microsoft.com/en-us/azure/role-based-access-control/role-assignments-portal?tabs=current>`_.
+
+The credentials for Service Principal can be specified as follows::
+
+    azure {
+        activeDirectory {
+            servicePrincipalId = '<YOUR SERVICE PRINCIPAL CLIENT ID>'
+            servicePrincipalSecret = '<YOUR SERVICE PRINCIPAL CLIENT SECRET>'
+            tenantId = '<YOUR TENANT ID>'
+        }
+
+        storage {
+            accountName = '<YOUR STORAGE ACCOUNT NAME>'
+        }
+
+        batch {
+            accountName = '<YOUR BATCH ACCOUNT NAME>'
+            location = '<YOUR BATCH ACCOUNT LOCATION>'
+        }
+    }
+
+
 Advanced settings
 ==================
 
@@ -353,10 +389,13 @@ The following configuration options are available:
 ============================================== =================
 Name                                           Description
 ============================================== =================
+azure.activeDirectory.servicePrincipalId       The service principal client ID
+azure.activeDirectory.servicePrincipalSecret   The service principal client secret
+azure.activeDirectory.tenantId                 The Azure tenant ID
 azure.storage.accountName                      The blob storage account name
 azure.storage.accountKey                       The blob storage account key
 azure.storage.sasToken                         The blob storage shared access signature token. This can be provided as an alternative to the ``accountKey`` setting.
-azure.storage.tokenDuration                    The duration of the shared access signature token created by Nextflow when the ``sasToken`` option is *not* specified (default: ``12h``).
+azure.storage.tokenDuration                    The duration of the shared access signature token created by Nextflow when the ``sasToken`` option is *not* specified (default: ``48h``).
 azure.batch.accountName                        The batch service account name.
 azure.batch.accountKey                         The batch service account key.
 azure.batch.endpoint                           The batch service endpoint e.g. ``https://nfbatch1.westeurope.batch.azure.com``.
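
The settings table above can also be assembled into the classic Shared Key configuration, as a counterpart to the new Service Principal option. This is a hedged sketch using only options listed in the table; all account names, keys and the endpoint are placeholders:

```groovy
// Minimal Shared Key configuration sketch (placeholder values).
// Uses only settings from the advanced-settings table; not part of the commit.
azure {
    storage {
        accountName = '<YOUR STORAGE ACCOUNT NAME>'
        accountKey  = '<YOUR STORAGE ACCOUNT KEY>'
    }
    batch {
        accountName = '<YOUR BATCH ACCOUNT NAME>'
        accountKey  = '<YOUR BATCH ACCOUNT KEY>'
        endpoint    = 'https://nfbatch1.westeurope.batch.azure.com'  // example endpoint from the table
    }
}
```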

assets/docs/edge/_sources/basic.rst.txt

Lines changed: 5 additions & 4 deletions

@@ -96,6 +96,7 @@ The following batch schedulers are supported:
 * `Univa grid engine <http://www.univa.com/>`_
 * `Platform LSF <http://www.ibm.com/systems/technicalcomputing/platformcomputing/products/lsf/>`_
 * `Linux SLURM <https://computing.llnl.gov/linux/slurm/>`_
+* `Flux Framework <https://flux-framework.org/>`_
 * `PBS Works <http://www.pbsworks.com/gridengine/>`_
 * `Torque <http://www.adaptivecomputing.com/products/open-source/torque/>`_
 * `HTCondor <https://research.cs.wisc.edu/htcondor/>`_

@@ -133,15 +134,15 @@ Read the :ref:`script-page` section to learn about the Nextflow scripting langua
 Configuration options
 ---------------------
 
-Pipeline configuration properties are defined in a file named ``nextflow.config`` in the pipeline execution directory.
+Pipeline configuration properties are defined in a file named ``nextflow.config`` in the pipeline execution directory.
 
-This file can be used to define which executor to use, the process's environment variables, pipeline parameters etc.
+This file can be used to define which executor to use, the process's environment variables, pipeline parameters etc.
 
 A basic configuration file might look like this::
 
-    process {
+    process {
       executor='sge'
-      queue = 'cn-el6'
+      queue = 'cn-el6'
    }
 
 
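
The ``nextflow.config`` example in the hunk above targets SGE. A hypothetical variant for one of the other schedulers in the supported list (SLURM) changes only the executor name and queue; the queue name here is made up:

```groovy
// Hypothetical SLURM counterpart of the SGE example (not in the commit).
process {
    executor = 'slurm'
    queue = 'long'   // assumed SLURM partition/queue name
}
```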
assets/docs/edge/_sources/channel.rst.txt

Lines changed: 0 additions & 43 deletions

@@ -88,19 +88,6 @@ Channels may be created explicitly using the following channel factory methods.
 methods can be specified either as ``channel.of()`` or ``Channel.of()``, and so on.
 
 
-.. _channel-create:
-
-create
-------
-
-.. warning::
-    The ``create`` method is no longer available in DSL2 syntax.
-
-Creates a new channel, as shown below::
-
-    channelObj = Channel.create()
-
-
 .. _channel-empty:
 
 empty

@@ -476,36 +463,6 @@ See also: `fromPath`_ factory method.
 Channel methods
 ===============
 
-.. _channel-bind1:
-
-bind
-----
-
-.. warning::
-    The ``bind`` method is no longer available in DSL2 syntax.
-
-Channel objects provide a `bind( )` method which is the basic operation to send a message over the channel.
-For example::
-
-    myChannel = Channel.create()
-    myChannel.bind( 'Hello world' )
-
-
-.. _channel-bind2:
-
-operator <<
------------
-
-.. warning::
-    The ``<<`` operator is no longer available in DSL2 syntax.
-
-The operator ``<<`` is just a syntax sugar for the ``bind`` method. Thus, the following example produces
-an identical result as the previous one::
-
-    myChannel = Channel.create()
-    myChannel << 'Hello world'
-
-
 .. _channel-subscribe:
 
 subscribe
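
The ``create``, ``bind`` and ``<<`` examples removed by this commit are DSL1 idioms. As a rough sketch of the DSL2 equivalent, using the ``Channel.of`` factory and ``subscribe`` method that remain documented (variable names are illustrative):

```groovy
// DSL2 style: create a channel with a factory method instead of
// Channel.create()/bind(), then consume its items with subscribe.
myChannel = Channel.of('Hello world')
myChannel.subscribe { println it }
```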
