# [UPDATE] Adaptation of `trigger.sh` for SLURM and SGE #302
Comments
Hi @braffes, feel free to share your SLURM config. Thank you!
Hi, this is just a first attempt, and I haven't completely tested or optimized it for every type of job. However, it should still work. I think `clusterOptions` is not mandatory, since the default QOS should be normal. Please note that I'm currently updating atlas/qsample, so the defaultab.config file might not be the latest version. Feel free to merge if any information is missing. I will try to come back with a more accurate/optimized configuration when I can.
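For reference, a SLURM profile of the kind discussed above might look roughly like this in a Nextflow config; the queue name and the commented-out `clusterOptions` value are placeholders, not the actual contents of the shared config file:

```groovy
// Hypothetical SLURM profile sketch (values are placeholders).
profiles {
    slurm {
        process {
            executor = 'slurm'
            queue    = 'normal'          // partition name on the cluster
            // clusterOptions is optional if the default QOS already applies:
            // clusterOptions = '--qos=normal'
        }
    }
}
```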
Thanks @braffes, I'll test it next week and let you know.
## 🚀 [UPDATE] Unified Nextflow Configuration for SLURM and SGE

### ✅ Subtasks

### 📌 Summary
This issue describes the integration of both SLURM and SGE execution environments into a single Nextflow configuration.

### 🔄 Main Changes

### 📂 Modified Files

### 🛠️ Steps to Apply the Changes

#### 1️⃣ Modify
All changes are done; tomorrow I'll test it within the branch.
After testing, I think the config files are becoming too involved and should be rethought for the sake of clarity and scalability. This could be the new structure:

The

And rethink its old content:

Also rethink this
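The proposed layout itself isn't shown above; purely as an illustration of the kind of modular restructuring being discussed, a split along these lines is common in Nextflow setups (file names are hypothetical):

```groovy
// nextflow.config: hypothetical modular layout, file names are illustrative.
includeConfig 'conf/base.config'              // defaults shared by all executors

profiles {
    slurm { includeConfig 'conf/slurm.config' }
    sge   { includeConfig 'conf/sge.config' }
}
```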
Done so far:

Pending:
### Feedback on SLURM Adaptation in the Pipeline

I would like to get your feedback regarding the SLURM adaptation I am working on. In our computing center, we are required to first launch a process using

### My question

Does your cluster work in the same way, or does it follow a different setup? I want to understand whether you also need a script to launch the Nextflow process in your environment. The best solution for me would be if it works as I have implemented it now, since that way I don't have to handle specific cases. However, I am open to your feedback and would appreciate your input.

I have added a new script for this purpose, which you can find in the test version of ATLAS:

Looking forward to your feedback! Thanks,
Hi @rolivella, in my current qsample setup the Nextflow command is started inside my VM, not in a SLURM job. I'm not really interested in using a job to handle the Nextflow process; from my point of view, it would just waste one core... That said, I can run some tests on your new version if needed. Best,
Ok @braffes, thanks for your feedback, it makes sense to add an option to skip this Nextflow job. I'll do it, but you'll have to test it because it's not allowed on my institutional cluster.
Hi @braffes, I'm going to refocus on this issue to have it wrapped up by next week. Actually, as far as I know about SLURM, I believe you do need a job script to launch the Nextflow process. It's not like SGE in that sense. So the way I've implemented it now (having Nextflow started via a SLURM job) should be fine. What's your opinion on that?
Hi @rolivella, I am not sure I understand your point. There are two possibilities:

1. Launch the main Nextflow process itself as a SLURM job (e.g. via `sbatch` and a wrapper script), which then submits the pipeline tasks.
2. Start the main Nextflow process directly (outside any job) and let the `slurm` executor submit each task as a SLURM job.
I currently use the second option since I don't need a SLURM job to run the Nextflow main process and I don't want to waste one core of my HPC cluster on this job. I have never played with SGE, but after reading a bit of the documentation, it seems that sbatch and qsub do the same kind of thing, right? In both cases you have the choice of creating a job for the main Nextflow process or not. I think it would be a good idea to support both possibilities.
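For illustration, the two modes roughly correspond to the commands below; the wrapper script name, pipeline entry point, and profile name are placeholders, not the project's actual interface:

```bash
# Option 1: the main Nextflow process itself runs inside a SLURM job.
sbatch submit_slurm.sh

# Option 2: Nextflow runs on the login node or a VM; only the pipeline tasks
# are submitted as SLURM jobs by the executor set in the config.
nextflow run main.nf -profile slurm   # main.nf and the profile name are placeholders
```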
Hi @braffes, yes, you're right: the issue I have is that I cannot test the second option because my cluster only works as in option 1. The only solution I can think of is to change the pipeline so that it supports option 2 as well, and then you could try it out once it's ready. I would push it to the atlas-test branch on GitHub. Would that work for you? @temaia do you know which mode your SLURM cluster uses? Or can I ask someone to clarify this?
Hi @rolivella, option 2 is the default one in your implementation, since I only modified the nextflow file to use the slurm executor and I didn't change trigger.sh. I'm fine with testing the new implementation.
@braffes ah, now I see. In that case I think I've finished the implementation apart from some small loose ends.
## 🚀 [UPDATE] Adaptation of `trigger.sh` for SLURM and SGE

### 📌 Summary
This issue documents the adaptation of `trigger.sh` to be compatible with both SLURM and SGE, maintaining a modular structure and using `submit_slurm.sh` for job submission in SLURM.

### 🔄 Main Changes
- Automatic SLURM/SGE detection added at the start of `trigger.sh`.
- Reworked `launch_nf_run` so that:
  - on SLURM, the run is submitted with `sbatch submit_slurm.sh`;
  - on SGE, it directly executes `nextflow run`.
- No scheduler-specific settings are hard-coded in `trigger.sh`; everything is managed through the `.config` files.

### 📂 Modified Files
- `trigger.sh`
- `submit_slurm.sh` (renamed from `submit_nf.sh` for clarity)

### 🛠️ Steps to Apply the Changes

#### 1️⃣ Modify `trigger.sh`

🔹 **Add automatic SLURM/SGE detection**

Add this block at the beginning of `trigger.sh`:
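As a rough sketch of this kind of detection (the command probing and the `EXECUTOR` variable name are assumptions, not necessarily the actual code in the PR):

```bash
# Detect the available scheduler by probing for its submission command.
if command -v sbatch >/dev/null 2>&1; then
    EXECUTOR="slurm"
elif command -v qsub >/dev/null 2>&1; then
    EXECUTOR="sge"
else
    echo "ERROR: neither sbatch (SLURM) nor qsub (SGE) found in PATH" >&2
    exit 1
fi
echo "Detected scheduler: $EXECUTOR"
```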
🔹 **Replace the `launch_nf_run` function**

Replace the existing `launch_nf_run` function with this improved version:
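A minimal sketch of the branching this describes, with placeholder argument handling rather than the actual implementation:

```bash
# Launch the Nextflow run either through SLURM or directly (SGE).
launch_nf_run() {
    if [ "$EXECUTOR" = "slurm" ]; then
        # SLURM: the main Nextflow process itself is submitted as a job.
        sbatch submit_slurm.sh "$@"
    else
        # SGE (or anything else): start Nextflow directly; per-task jobs are
        # handled by the executor set in the .config files.
        nextflow run "$@"
    fi
}
```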
#### 2️⃣ Update `submit_slurm.sh` (previously `submit_nf.sh`)

Ensure `submit_slurm.sh` is properly structured to receive arguments and execute Nextflow correctly:
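A hypothetical minimal version, assuming the script only wraps `nextflow run` with SLURM directives (job name, memory, and time values are placeholders):

```bash
#!/bin/bash
#SBATCH --job-name=qsample_nextflow   # placeholder job name
#SBATCH --cpus-per-task=1             # the driver process needs little CPU
#SBATCH --mem=4G                      # placeholder memory request
#SBATCH --time=72:00:00               # placeholder wall-clock limit

# Forward whatever trigger.sh passed in; the pipeline's own tasks are then
# submitted by the slurm executor configured in the .config files.
nextflow run "$@"
```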
### ✅ How to Test the Changes
- Run `trigger.sh` in a SLURM environment and verify that `sbatch` is used correctly.
- Run `trigger.sh` in an SGE environment and verify that `nextflow run` is executed directly.
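For illustration only (the exact `trigger.sh` invocation depends on the local setup):

```bash
# SLURM environment: launch the trigger and check that a job was queued.
bash trigger.sh
squeue -u "$USER"

# SGE environment: Nextflow should start directly, and only the pipeline
# tasks appear in the SGE queue.
bash trigger.sh
qstat -u "$USER"
```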
### 📌 Conclusion
This change simplifies pipeline management by using a single `trigger.sh` for both SLURM and SGE, maintaining modularity and leveraging `submit_slurm.sh` for job submission in SLURM. 🚀🔥