Picard iterations set up correctly? #32261
Replies: 2 comments 9 replies
Hello, I believe that in Cardinal the time steps are used to perform the fixed-point iterations (rather than nesting fixed-point iterations within a time step).
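To illustrate the distinction, here is a minimal sketch of the two patterns using standard MOOSE Executioner parameters (values are placeholders; a real input can only contain one `[Executioner]` block, the two are shown together just for comparison):

```
# Pattern A: each pseudo time step is one Picard iteration; the run
# stops when the solution stops changing between steps.
[Executioner]
  type = Transient
  num_steps = 50
  steady_state_detection = true
  steady_state_tolerance = 1e-6
[]

# Pattern B: fixed-point (Picard) iterations nested inside each time
# step, converged before the step is accepted.
[Executioner]
  type = Transient
  num_steps = 1
  fixed_point_max_its = 20
  fixed_point_rel_tol = 1e-6
[]
```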
@megan-crocker have you tried using constant relaxation on the mesh tally? Oscillations between a stochastic solver and a deterministic solver are expected (especially if the coupling between the physics is weak) because the fields aren't statistically converged. Relaxation, which averages the response from the stochastic solver over Picard iterations, dampens these cycles and forces convergence. See the "Other Features" section here for more information.
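For reference, constant relaxation updates the tally as q_relaxed = (1 − α) q_old + α q_new. A minimal sketch of enabling it on the OpenMC wrapping (parameter names as I recall them from `OpenMCCellAverageProblem`; double-check against the Cardinal docs for your version):

```
[Problem]
  type = OpenMCCellAverageProblem
  power = 1e4            # placeholder total power [W]
  tally_type = cell
  # Relax the tally between Picard iterations:
  #   q_relaxed = (1 - alpha) * q_old + alpha * q_new
  relaxation = constant
  relaxation_factor = 0.5
[]
```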
Hi all,
I am essentially trying to set up a tightly coupled neutronics-thermal simulation using Picard iterations, iterating back and forth between OpenMC and MOOSE. Currently I have the scripts set up as below. The workflow should perform an OpenMC solve, send the heat source to MOOSE, send the temperature back to OpenMC, and repeat until convergence. I want a steady-state solution, so I haven't included the time derivative (could this be the problem? the tutorials tend to include it).
I am running into an annoying issue where the solve 'converges' within the first time step. I have tried reducing the convergence tolerance, but then I just get oscillations and no actual convergence. This is my first time using this capability, so I am wondering whether I have set everything up correctly. I have removed the large functions and blocks to make the inputs easier to read.
Thank you :)
MOOSE.i
openmc.i
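(For readability, here is a rough skeleton of the kind of layout I mean on the OpenMC side; the sub-app name, transfer types, and variable names are illustrative placeholders, not my exact blocks:)

```
# openmc.i skeleton (illustrative; large blocks trimmed as noted above)
[Problem]
  type = OpenMCCellAverageProblem
  power = 1e4
  tally_type = cell
[]

[Executioner]
  type = Transient
  num_steps = 30   # each step acts as one Picard iteration
[]

[MultiApps]
  [solid]
    type = TransientMultiApp
    input_files = 'MOOSE.i'
    execute_on = timestep_end
  []
[]

[Transfers]
  [heat_source_to_solid]
    type = MultiAppCopyTransfer
    to_multi_app = solid
    source_variable = heat_source
    variable = heat_source
  []
  [temp_from_solid]
    type = MultiAppCopyTransfer
    from_multi_app = solid
    source_variable = temp
    variable = temp
  []
[]
```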