Thank you for sharing your great work. While reproducing your project, I encountered an issue that I hope you can help with. I executed the following commands:

python3 intervention_gptj_fever.py --lname fc_in --rate 9.9 --lnum 26
python3 intervention_gptj_fever.py --lname fc_in --rate 9.0 --lnum 26
python3 intervention_gptj_fever.py --lname dont

However, LASER did not yield the performance improvement I expected. I've attached the log files for your reference. Could you kindly look into them and help me identify any potential problems in my setup?

GPTJ-log-26-fc_in-9.0.txt
GPTJ-log-24-dont-1.txt
GPTJ-log-26-fc_in-9.9.txt
Dear author, after tuning the parameters, I found that better performance can be achieved on my device (a GeForce RTX 4090) when selecting layer 24 and compressing at an 80% ratio (rate parameter set to 8.0). I speculate that such differences might stem from different hardware configurations.

GPTJ-log-24-fc_in-8.0.txt
If I understand correctly, the --lname dont command runs the base model, and I can see that it recovers the 50.2% accuracy and 1.244 mean log-prob loss, matching the results in the paper (Table 1). Is that right?
The other two commands each perform a single LASER intervention on layer 26, in the first layer of the MLP (fc_in), reducing the rank down to a 0.01 and 0.1 fraction of the maximum rank respectively (the maximum rank being the minimum of the two dimensions of the weight matrix). The rate argument is related to ρ in the paper as ρ = 1 − 0.1 × rate.
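For intuition, here is a minimal sketch of what such a rank-reduction step might look like; the function name low_rank_approx and the rounding details are illustrative assumptions, not necessarily the repository's exact implementation.

```python
import torch

def low_rank_approx(weight: torch.Tensor, rate: float) -> torch.Tensor:
    """Illustrative LASER-style rank reduction of a single weight matrix.

    rho = 1 - 0.1 * rate, so rate 9.9 keeps ~1% of the maximum rank and
    rate 9.0 keeps ~10%. The maximum rank is min(out_features, in_features).
    """
    rho = 1.0 - 0.1 * rate
    max_rank = min(weight.shape)            # minimum of the two dimensions
    k = max(1, int(rho * max_rank))         # number of singular values to keep
    U, S, Vh = torch.linalg.svd(weight.float(), full_matrices=False)
    approx = U[:, :k] @ torch.diag(S[:k]) @ Vh[:k, :]
    return approx.to(weight.dtype)

# GPT-J's fc_in weight in each block is roughly (16384, 4096), so the maximum
# rank is 4096 and rate 9.9 keeps on the order of 40 singular values.
```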
The best hyperparameters for this setting in the paper are listed in Table 3 and correspond to [Uin, 24, 0.01], where Uin is fc_in in the code, 24 is the layer number (lnum), and ρ = 0.01 corresponds to a rate of 9.9. So I would recommend running:

python3 intervention_gptj_fever.py --lname fc_in --rate 9.9 --lnum 24
Can you try this setting for me? I noticed that in your second comment you eventually tried layer 24, though with a rate of 8.0, and that it still gave you an improvement over the base model. Is my understanding correct?
Lastly, we have observed that some variation is possible due to the inherent stochasticity of the PyTorch SVD call. Our experiments on one domain suggested the gap is small but noticeable.
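As a rough, purely illustrative check of how much numerical variation the SVD backend can introduce, one can compare singular values computed on CPU and GPU for the same random matrix (the shape below is an arbitrary assumption, smaller than GPT-J's actual fc_in):

```python
import torch

# Compare singular values from the CPU and GPU SVD backends on a random matrix.
# Differences are usually tiny, but they can move borderline directions just
# above or below the rank cutoff used by the intervention.
W = torch.randn(2048, 512)
s_cpu = torch.linalg.svdvals(W)
if torch.cuda.is_available():
    s_gpu = torch.linalg.svdvals(W.cuda()).cpu()
    print("max |ΔS| between CPU and GPU SVD:", (s_cpu - s_gpu).abs().max().item())
```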