[ENH] bedpostx and probtrackx gpu and multithread support #3722

Open
mauriliogenovese wants to merge 3 commits into nipy:master from mauriliogenovese:fix-fsl-dti-parallel-and-gpu

Conversation

@mauriliogenovese
Contributor

This enables the GPU version of probtrackx2 and adds support for specifying the number of threads used by bedpostx (by default it uses one thread per available core).
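As a rough illustration of the mechanism this PR relies on, the sketch below shows a `use_gpu` flag switching the executable name between the CPU and CUDA builds of an FSL tool. This is a minimal pure-Python stand-in, not nipype code; the class name and `_cmd` handling here are simplified for illustration.

```python
# Illustrative stand-in (not nipype): a use_gpu flag selects between the CPU
# and GPU builds of probtrackx2 by swapping the command name.
class ProbTrackXSketch:
    def __init__(self, use_gpu=False):
        self.use_gpu = use_gpu
        self._cmd = "probtrackx2"
        self._cuda_update()

    def _cuda_update(self):
        # Pick the GPU binary when the flag is set, the CPU binary otherwise.
        self._cmd = "probtrackx2_gpu" if self.use_gpu else "probtrackx2"

cpu = ProbTrackXSketch()            # _cmd is "probtrackx2"
gpu = ProbTrackXSketch(use_gpu=True)  # _cmd is "probtrackx2_gpu"
```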

@codecov

codecov bot commented Mar 23, 2025

Codecov Report

❌ Patch coverage is 45.45455% with 12 lines in your changes missing coverage. Please review.
✅ Project coverage is 72.88%. Comparing base (114c73d) to head (3bea7ef).
⚠️ Report is 31 commits behind head on master.

Files with missing lines Patch % Lines
nipype/interfaces/fsl/dti.py 45.45% 12 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #3722      +/-   ##
==========================================
+ Coverage   72.85%   72.88%   +0.03%     
==========================================
  Files        1279     1279              
  Lines       59321    59387      +66     
==========================================
+ Hits        43219    43287      +68     
+ Misses      16102    16100       -2     

☔ View full report in Codecov by Sentry.
@mauriliogenovese changed the title from "bedpostx and probtrackx gpu and multithread support" to "[ENH] bedpostx and probtrackx gpu and multithread support" on Mar 23, 2025
@effigies force-pushed the fix-fsl-dti-parallel-and-gpu branch from 8f89cdc to ebe842d on March 2, 2026 at 13:22
@effigies force-pushed the fix-fsl-dti-parallel-and-gpu branch from ebe842d to 0c86c0b on March 2, 2026 at 13:23
Comment on lines +1091 to +1092
super().__init__(**inputs)
self.inputs.on_trait_change(self._cuda_update, "use_gpu")
Member


IIUC, this will only trigger _cuda_update if use_gpu is set after the interface is initialized, and will not respect the parameter in ProbTrackX2(..., use_gpu=True).

What about:

Suggested change (register the handler, then call it once so a value passed to __init__ is honored):

    super().__init__(**inputs)
    self.inputs.on_trait_change(self._cuda_update, "use_gpu")
    self._cuda_update()
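The reviewer's point can be demonstrated with a minimal stand-in (plain Python, not nipype's traits machinery): values passed to the constructor are applied before the change handler is registered, so the handler never fires for them, and one explicit call after registration keeps the command in sync. Class and method names below mirror the diff but are otherwise hypothetical.

```python
# Minimal stand-in showing why a change handler registered after __init__ has
# consumed its kwargs never sees a value passed at construction time, and why
# one explicit call after registration fixes it.
class Inputs:
    def __init__(self):
        self._observers = []
        self.use_gpu = False

    def on_trait_change(self, handler, name):
        self._observers.append((handler, name))

    def set(self, name, value):
        # Post-init changes go through here and notify observers.
        setattr(self, name, value)
        for handler, observed in self._observers:
            if observed == name:
                handler()

class Interface:
    def __init__(self, **inputs):
        self.inputs = Inputs()
        self._cmd = "probtrackx2"
        # Constructor kwargs are applied *before* the observer below exists,
        # so no notification fires for them.
        for name, value in inputs.items():
            setattr(self.inputs, name, value)
        self.inputs.on_trait_change(self._cuda_update, "use_gpu")
        self._cuda_update()  # the suggested fix: sync once, unconditionally

    def _cuda_update(self):
        self._cmd = "probtrackx2_gpu" if self.inputs.use_gpu else "probtrackx2"

# Works both when the flag is passed at construction time...
assert Interface(use_gpu=True)._cmd == "probtrackx2_gpu"
# ...and when it is set afterwards, via the observer.
iface = Interface()
iface.inputs.set("use_gpu", True)
assert iface._cmd == "probtrackx2_gpu"
```

Without the final `self._cuda_update()` call, the first assertion above would fail: the interface would keep the CPU command even though `use_gpu=True` was passed to the constructor.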

Comment on lines 456 to +457
self.inputs.on_trait_change(self._cuda_update, "use_gpu")
self.inputs.on_trait_change(self._num_threads_update, "num_threads")
Member


Likewise, does this work if I pass use_gpu or num_threads to __init__()? Or should you call these unconditionally here?
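For the two-handler case the same remedy applies: call each updater once, unconditionally, after registering the observers. The sketch below is again a hypothetical stand-in, not nipype code; in particular, the environment variable used to carry the thread count is an assumption made for illustration.

```python
# Sketch (plain Python, not nipype) of calling both updaters unconditionally
# so use_gpu and num_threads passed to __init__ take effect immediately.
class BedpostxSketch:
    def __init__(self, use_gpu=False, num_threads=None):
        self.use_gpu = use_gpu
        self.num_threads = num_threads
        self._cmd = "bedpostx"
        self.environ = {}
        # Unconditional initial sync, regardless of how the values were set.
        self._cuda_update()
        self._num_threads_update()

    def _cuda_update(self):
        self._cmd = "bedpostx_gpu" if self.use_gpu else "bedpostx"

    def _num_threads_update(self):
        # Hypothetical: expose the thread count via an environment variable.
        if self.num_threads is not None:
            self.environ["FSLSUB_PARALLEL"] = str(self.num_threads)

b = BedpostxSketch(use_gpu=True, num_threads=4)
# b._cmd is "bedpostx_gpu" and b.environ carries the thread count.
```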
