Python version (python3 -V): 3.10.12
NVFlare version (python3 -m pip list | grep "nvflare"): 2.3.0
NVFlare branch (if running examples, please use the branch that corresponds to the NVFlare version, git branch): 2.3
Operating system: Ubuntu 22.04
Have you successfully run any of the following examples?
- hello-numpy-sag with simulator
- hello-pt with simulator
- hello-numpy-sag with POC
- hello-pt with POC
Please describe your question
I was testing some CSV tabular datasets, starting from the "cifar10" example. I set up 2 VMs with the same configuration and tools as the local server machine, and I ran 2 different experiments: one without HE and one with HE.
In order to have a fair comparison, I used the same "cifar10_learner" file and the same "cifar10_nets" file. The only things I changed between the two experiments are the JSON configuration file and the provisioning YAML file, in order to enable HE (sketched below).
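For context, the HE-related changes look roughly like this. This is only a sketch based on the standard NVFlare 2.3 HE components (HEBuilder in provisioning, HE filters in the job config); the argument values shown are the documented defaults, not necessarily the exact ones from my runs.

```yaml
# project.yml (provisioning): add the HEBuilder so HE context files
# are generated for the server and every client site during provisioning.
builders:
  - path: nvflare.lighter.impl.he.HEBuilder
    args:
      poly_modulus_degree: 8192
      coeff_mod_bit_sizes: [60, 40, 40]
      scale_bits: 40
      scheme: CKKS
```

On the job side, the client config adds the HE encrypt/decrypt filters (the task lists here are illustrative), while the server config swaps in the HE-aware components (HEModelShareableGenerator and HEInTimeAccumulateWeightedAggregator):

```json
{
  "task_result_filters": [
    {
      "tasks": ["train"],
      "filters": [
        {"path": "nvflare.app_opt.he.model_encryptor.HEModelEncryptor",
         "args": {"weigh_by_local_iter": true}}
      ]
    }
  ],
  "task_data_filters": [
    {
      "tasks": ["train", "validate"],
      "filters": [
        {"path": "nvflare.app_opt.he.model_decryptor.HEModelDecryptor",
         "args": {}}
      ]
    }
  ]
}
```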
All the experiments run just fine. The problem is that the results I obtain from the two experiments are a little strange, since HE is performing better than the experiment without HE. Is this possible or common? I would expect HE to perform worse due to the overhead and the cryptography under the hood, but for all three datasets I'm working with I obtain better results with HE.
Can you share your thoughts? Thanks in advance!