Currently, you'd use StoSim's internally built gnuplot or GnuR scripts, or you hook your own scripts into this loop. However, maybe you want to do something very different with the results. There should be a clean output method for these results, so one can pick them up as files and do whatever they like with them.
Actually, we should have a clean modular structure here, just as we have in the execution workflow: any code that does something with log data can plug in, while we keep control of where the data comes from and where you can pick it up.
The workflow delivers one or two (or more?) sets of log data, on which some analysis can be done. Any module should simply specify which data sets it should receive and how it should be called. This could work much like the way we call the executable, abstracted away from what it actually is: as long as it is able to read in the data sets, we're fine. The data sets are written somewhere as files and passed to the analysis module/script as filenames.
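As a rough sketch of what that contract could look like (the class and method names here are purely illustrative, not an actual StoSim API), a module would declare how many data sets it expects and which command to invoke; the workflow writes the data sets to files and passes the filenames along:

```python
# Hypothetical sketch of the proposed analysis-module contract.
# Names (AnalysisModule, prepare_inputs, run) are assumptions for
# illustration only, not StoSim's actual API.
import os
import subprocess


class AnalysisModule:
    """An analysis step that declares how many log data sets it expects
    and which command should be invoked on them."""

    def __init__(self, name, command, num_datasets):
        self.name = name
        self.command = command          # e.g. ["gnuplot", "plot.gnu"] or ["Rscript", "test.R"]
        self.num_datasets = num_datasets

    def prepare_inputs(self, datasets, workdir):
        """Write each data set to a file; the module only ever sees filenames,
        so the workflow stays in control of where the data lives."""
        assert len(datasets) == self.num_datasets
        filenames = []
        for i, rows in enumerate(datasets):
            path = os.path.join(workdir, "%s_data%d.log" % (self.name, i))
            with open(path, "w") as f:
                f.write("\n".join(rows) + "\n")
            filenames.append(path)
        return filenames

    def run(self, datasets, workdir):
        """Call the module like any executable, passing the data set filenames
        as command-line arguments."""
        return subprocess.call(self.command + self.prepare_inputs(datasets, workdir))
```

The point of the sketch is that the workflow never needs to know what the module is (gnuplot script, R script, arbitrary binary); it only writes the input files and hands over their paths, and the module's results land in `workdir` where the user can pick them up.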
The things we have right now (gnuplotting, R tests) should be made into pre-defined modules that plug into that workflow like any other module. How exactly is not settled yet; that is design work, also with regard to how a module can be parameterised.