Query in Robertson's code in predict_neuralode function #23
Replies: 18 comments
-
You are right. Although it's not a big deal here, since `p` and `u0` are correctly set in the solver call. By the way, for the PBE problem I don't think you need the Robertson code; I suspect the main difficulty is inferring the nonlinear model parameters, which can be improved by better-designed experiments. Or alternatively, don't expect the model parameters to match the ground truth, since there is no ground truth...
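A minimal sketch of what "correctly set in the solver call" means here, assuming SciML's `solve` keyword overrides for `u0` and `p`; the toy ODE below is only for illustration:

```julia
using OrdinaryDiffEq

f(u, p, t) = -p[1] .* u
prob = ODEProblem(f, [1.0], (0.0, 1.0), [2.0])

# solve's keyword overrides set the initial condition and parameters directly,
# so even solving `prob` instead of a remade `_prob` uses the right values.
sol = solve(prob, Tsit5(); u0 = [0.5], p = [3.0])
```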
-
Ohh no.. I was trying this code again on another stiff system, and happened to notice this Robertson issue again. I shall add this stiff case today to the same PBE repo in a new folder.. do take a look whenever you are free; it is currently giving me a headache because of … For PBE, I am satisfied for now with a good fit even though the weights haven't been learnt.
-
Got it! For this Robertson code, you can pass the `tspan` argument into the solver directly, without remaking the problem, and it will be the same. What exactly is the error?
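A hedged sketch of the two forms on a toy problem; if the installed DifferentialEquations.jl version does not accept a `tspan` keyword in `solve`, the `remake` form is the fallback:

```julia
using OrdinaryDiffEq

f(u, p, t) = -p[1] .* u
prob = ODEProblem(f, [1.0], (0.0, 1.0), [2.0])

# Form 1: remake the problem with a new time span, then solve it.
_prob = remake(prob, tspan = (0.0, 10.0))
sol1 = solve(_prob, Tsit5())

# Form 2 (what the comment suggests): pass tspan straight to solve.
sol2 = solve(prob, Tsit5(); tspan = (0.0, 10.0))
```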
-
It's that …
-
What's your ODE solver?
-
…
-
The solver looks good to me, although you could also try a pure stiff solver, like …
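As a hedged example of a pure stiff solver on the Robertson system (Rosenbrock-family methods like `Rosenbrock23` or `Rodas5` are common choices here, though not necessarily what was suggested):

```julia
using OrdinaryDiffEq

# Robertson's classic stiff ODE system.
function rober!(du, u, p, t)
    k1, k2, k3 = p
    du[1] = -k1 * u[1] + k3 * u[2] * u[3]
    du[2] =  k1 * u[1] - k2 * u[2]^2 - k3 * u[2] * u[3]
    du[3] =  k2 * u[2]^2
end

prob = ODEProblem(rober!, [1.0, 0.0, 0.0], (0.0, 1e5), [0.04, 3e7, 1e4])
# An implicit Rosenbrock method with tight tolerances handles the stiffness:
sol = solve(prob, Rosenbrock23(); abstol = 1e-8, reltol = 1e-8)
```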
-
I have just updated the code a little bit at 132c3e8, and you can have a look. One thing to note is that …
-
Yes, I will definitely look at this..
-
The Hockin-Mann problem looks interesting and I will also try it myself.

```julia
function p2vec(p)
    # The last entry sets a global slope that scales the rate-constant biases.
    slope = abs(p[end])
    w_b = @view(p[1:nr]) .* (10 * slope)
    # Reaction weights reshaped to (ns species) x (nr reactions).
    w_in = reshape(@view(p[nr * (ns + 1) + 1:nr * (2 * ns + 1)]), ns, nr)
    w_out = reshape(@view(p[nr + 1:nr * (ns + 1)]), ns, nr)
    w_out = @. -w_in * (10 ^ w_out)
    # Keep the reaction orders in a physically plausible range.
    w_in = clamp.(w_in, 0, 2.5)
    return w_in, w_b, w_out
end
```
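A hypothetical usage sketch, assuming `nr` reactions and `ns` species are globals as in the CRNN scripts; the parameter vector packs `nr` biases, `nr*ns` entries for `w_out`, `nr*ns` for `w_in`, and one trailing slope:

```julia
nr, ns = 10, 34                          # hypothetical problem sizes
p = randn(nr * (2 * ns + 1) + 1) .* 0.1  # random initial parameter vector
w_in, w_b, w_out = p2vec(p)
@assert size(w_in) == (ns, nr)
@assert size(w_out) == (ns, nr)
@assert length(w_b) == nr
```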
-
I feel …
-
Yup.. I am trying this..
-
How would you advise checking the training of the species (all 34 of them)? Update: resolved. I thought it better to …
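One hedged way to eyeball all 34 species at once; `pred`, `ydata`, and `tsteps` are placeholders for the prediction, training data, and time grid:

```julia
using Plots

tsteps = range(0, 1; length = 50)
pred  = rand(34, 50)   # placeholder for model predictions (species x time)
ydata = rand(34, 50)   # placeholder for training data

# One subplot per species: predicted trajectory vs. training data.
plt = plot(layout = (6, 6), size = (1400, 1200), legend = false)
for i in 1:34
    plot!(plt, tsteps, pred[i, :], subplot = i, lw = 2)
    scatter!(plt, tsteps, ydata[i, :], subplot = i, ms = 2)
end
savefig(plt, "species_check.png")
```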
-
Here we try to clamp the whole prediction to the upper bound. But for multiscale problems I think it makes more sense to have …
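One plausible reading of that suggestion is a per-species upper bound rather than a single scalar `ub`. A hedged sketch of that idea (the names and the 1.5x headroom are assumptions):

```julia
# ydata: (species x time) training data. One upper bound per species means
# fast and slow species are each clamped on their own scale.
ydata = rand(34, 50) .* exp.(randn(34))   # placeholder multiscale data
ub = vec(maximum(ydata, dims = 2)) .* 1.5 # assumed 1.5x headroom per species
pred = rand(34, 50) .* 10                 # placeholder prediction
pred_clamped = clamp.(pred, 0.0, ub)      # ub broadcasts row-wise over species
```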
-
Sounds like a good idea. I haven't really played with `ub` a lot. In general, for non-stiff problems, `ub` is not so necessary.
-
Trying that trick of … Thought of parallelizing: did you mean to use …?
-
Yes, normally we cannot do that, since we have to feed the optimizer one batch at a time. I have the impression that some work is trying to do distributed training, but I don't think it is widely adopted yet.
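A minimal sketch of the sequential loop described above, in the style of the CRNN training scripts; `loss`, `n_exp`, and the ADAM settings are placeholders:

```julia
using Flux, Random

p = randn(100) .* 0.1                   # placeholder parameter vector
loss(p, i_exp) = sum(abs2, p) / i_exp   # placeholder per-experiment loss
n_exp = 8
opt = ADAM(1e-3)

for epoch in 1:10
    # Batches (experiments) are fed to the optimizer one at a time:
    for i_exp in randperm(n_exp)
        grad = Flux.gradient(q -> loss(q, i_exp), p)[1]
        Flux.Optimise.update!(opt, p, grad)
    end
end
```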
-
I think it's better to open a thread in the discussions section for …
-
Hi @jiweiqi, in the Robertson code, shouldn't one solve for `_prob` and not `prob`? Copying the code here for reference, from https://github.com/DENG-MIT/CRNN/blob/main/robertson/rober_crnn.jl
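For context, a paraphrased sketch of the pattern in question (not the file's exact code; `alg` and `tsteps` are stand-ins). `remake` returns a new problem, so the remade `_prob` is the one that should be solved:

```julia
using OrdinaryDiffEq

function predict_neuralode(prob, u0, p, alg, tsteps)
    _prob = remake(prob, u0 = u0, p = p, tspan = (tsteps[1], tsteps[end]))
    sol = solve(_prob, alg, saveat = tsteps)   # `_prob`, not the original `prob`
    return Array(sol)
end
```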