Currently, NESTML checks units and, when compiling the code, converts all units to the corresponding equivalent in the "NEST unit system".
I have two issues with the current state:
- this means that all non-default units must be passed (e.g. through `SetStatus`) in the NEST unit system, which is very inconvenient if the model was not defined in this system
- the current conversion seems to be incorrect for at least some units (I was using ion concentrations; I'll try to provide an MWE later)
Therefore, besides trying to correct the current errors, I was wondering whether we should also provide proper access to units via PyNESTML in the simulation script, either:
- by wrapping the NEST functions which deal with parameters (ugly; a rough sketch follows below)
- by ensuring that the conversion from such a Python unit object to a double (when it gets into NEST) always yields the magnitude expected by the "NEST unit system" (better?)
- or with a totally different scheme I could not think of and which you can propose in the comments ;)
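For the first option, a rough sketch of what such a wrapper could look like (just to make the idea concrete; I'm using astropy as an example unit package, and the function names and the unit table below are purely illustrative, not existing API):

```python
import astropy.units as u
import nest

# Units NEST uses internally (the "NEST unit system"); illustrative list only.
NEST_UNITS = (u.ms, 1 / u.ms, u.mV, u.pA, u.pF, u.nS)

def _to_nest_magnitude(value):
    """Return the plain float NEST expects for `value`."""
    if not isinstance(value, u.Quantity):
        return value
    for target in NEST_UNITS:
        if value.unit.physical_type == target.physical_type:
            return float(value.to(target).value)
    return float(value.si.value)  # no NEST equivalent: fall back to SI

def set_status(nodes, params):
    """Thin wrapper around nest.SetStatus that converts unit-aware values."""
    nest.SetStatus(nodes, {k: _to_nest_magnitude(v) for k, v in params.items()})
```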
Hi @Silmathoron, thanks for bringing this up; let me just make sure I understand things correctly. NEST has a set of predefined "neuroscience units", which are the implicit physical units associated with otherwise undecorated C++ floats/doubles. These are defined in pynestml/codegeneration/unit_converter.py, and all values in NESTML are converted to these units during code generation. Is this close to what you are suggesting in your second bullet point?
When you say simulation script, do you mean e.g. PyNEST or PyNN? Passing unit types between PyNESTML and either of the above will be quite complex to implement.
@clinssen sorry for the late reply, I thought I responded before but maybe I forgot to press send.
This discussion is somewhat related to #398 so I'll also link it here.
The current situation is as follows:
suppose I have two variables A and B: A has a unit which is part of the "neuroscience system" (say it is 5 Hz), while B is a concentration (say I defined it as 2 umol/L).
Then:
- when interacting with A through nest.GetStatus/SetStatus, I'll get 0.005 because it will be converted to 1/ms, so I'll always have to convert
- on the other hand, because B is not part of the neuroscience system, its value will be 2.0 and no conversion is required
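For concreteness, the two conversions above correspond to (astropy used here only to show the arithmetic):

```python
import astropy.units as u

# A is part of the neuroscience system, so NEST rescales it to 1/ms
(5 * u.Hz).to(1 / u.ms).value    # 0.005

# B has no NEST equivalent, so its raw magnitude is taken as-is
(2 * u.umol / u.L).value         # 2.0
```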
To my mind, this is both annoying and error-prone, so I would propose to decide once and for all that all non-neuroscience units are SI, then subclass astropy's Quantity so that when the object is passed to NEST (i.e. converted to a float), it is automatically converted to the proper magnitude.
This would be done by implementing a custom __float__ method.
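A minimal sketch of what I have in mind (the class name NESTQuantity and the unit table are made up for illustration, not existing NESTML or NEST API):

```python
import astropy.units as u

class NESTQuantity(u.Quantity):
    """astropy Quantity whose float() yields the magnitude NEST expects."""

    # internal NEST units (the "neuroscience system"); illustrative list only
    _NEST_UNITS = (u.ms, 1 / u.ms, u.mV, u.pA, u.pF, u.nS)

    def __float__(self):
        for target in self._NEST_UNITS:
            if self.unit.physical_type == target.physical_type:
                return float(self.to(target).value)
        # not part of the neuroscience system -> plain SI magnitude
        return float(self.si.value)
```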
In the Python simulation script, users would thus do something like:
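(hypothetical usage; the model, the parameter names, and whether PyNEST would call float() on such values automatically would all still need to be sorted out)

```python
import astropy.units as u
import nest

# purely illustrative model/parameter names
neuron = nest.Create("my_model")

A = NESTQuantity(5.0, u.Hz)          # part of the neuroscience system
B = NESTQuantity(2.0, u.umol / u.L)  # not part of it -> treated as SI

float(A)   # 0.005  (the 1/ms value NEST expects)
float(B)   # 0.002  (plain SI magnitude, mol / m**3)

# if PyNEST converted parameter values through float(), no manual
# rescaling would ever be needed here:
nest.SetStatus(neuron, {"A": A, "B": B})
```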