
Tutorial for data transfer workflow for large datasets #235

@jnywong

Description


Context

Driven by the need to process large bioscientific datasets for the Catalyst partner communities.

We propose a data transfer workflow like the following:

a) users should stage their 'input' datasets in object storage buckets
b) if a workflow supports reading directly from object storage, use that; otherwise, make a local copy from object storage to /tmp (see the sketch below)
c) use /tmp for any intermediate files created during a workflow pipeline
d) push 'output' datasets to object storage for persistence
e) strongly encourage community users to keep home directory storage under 1GB per user
f) discourage use of shared directories except for smaller datasets (100GB total per community)

See 2i2c-org/infrastructure#4213
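
To make steps (b)–(d) concrete, here is a minimal Python sketch using fsspec/gcsfs and pandas. The bucket name, paths, and the filtering step are hypothetical placeholders, and it assumes these libraries are already in the hub image; the tutorial would substitute the community's actual bucket and tools.

```python
# Minimal sketch, assuming a GCS bucket (all names here are hypothetical) and
# that fsspec + gcsfs and pandas are available in the hub image.
import fsspec
import pandas as pd

INPUT_URI = "gs://example-community-bucket/inputs/samples.csv"    # hypothetical
OUTPUT_URI = "gs://example-community-bucket/outputs/results.csv"  # hypothetical

# (b) Read directly from object storage when the tool supports remote URIs;
#     pandas can do this when gcsfs is installed.
df = pd.read_csv(INPUT_URI)

# (b, fallback) If a tool needs a local file, copy from object storage to /tmp.
fs = fsspec.filesystem("gs")
local_copy = "/tmp/samples.csv"
fs.get(INPUT_URI, local_copy)

# (c) Keep intermediate files in /tmp rather than the home directory.
intermediate = "/tmp/filtered.csv"
df.head(1000).to_csv(intermediate, index=False)  # stand-in for a real processing step

# (d) Push the final output back to object storage for persistence.
fs.put(intermediate, OUTPUT_URI)
```

The same flow applies to S3 or Azure buckets by switching the URI scheme and installing the matching fsspec backend (s3fs or adlfs).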

Proposal

Document this workflow as a tutorial that guides hub admins and end users through the recommended steps.

Updates and actions

No response
