Problem Encountered During Reproduction #22

Open · ZY123-GOOD opened this issue Mar 19, 2024 · 2 comments
Labels: question (Further information is requested)

@ZY123-GOOD commented Mar 19, 2024

Thank you for sharing your great work. While reproducing your project, I encountered an issue that I hope you can provide some help with. I executed the following commands:
python3 intervention_gptj_fever.py --lname fc_in --rate 9.9 --lnum 26
python3 intervention_gptj_fever.py --lname fc_in --rate 9.0 --lnum 26
python3 intervention_gptj_fever.py --lname dont
However, I noticed that LASER did not yield the expected performance improvement. I have attached the log files for your reference. Could you kindly look into them and help me identify any potential problems in my setup?
GPTJ-log-26-fc_in-9.0.txt
GPTJ-log-24-dont-1.txt
GPTJ-log-26-fc_in-9.9.txt

@ZY123-GOOD (Author) commented

Dear author, after tuning the parameters, I found that better performance can be achieved on my device (equipped with a GeForce RTX 4090) when selecting layer 24 and compressing at an 80% ratio (with the rate parameter set to 8.0). I speculate that such differences might stem from variations across hardware configurations.
GPTJ-log-24-fc_in-8.0.txt

@dkmisra (Collaborator) commented Mar 22, 2024

Hi @ZY123-GOOD. Apologies for the late response; I just noticed this issue.

If I understand correctly, you are running the following three experiments:

python3 intervention_gptj_fever.py --lname dont
python3 intervention_gptj_fever.py --lname fc_in --rate 9.9 --lnum 26
python3 intervention_gptj_fever.py --lname fc_in --rate 9.0 --lnum 26

Right? The first one runs the base model, and I can see that it recovers the 50.2% accuracy and 1.244 mean log-prob loss that match the results in the paper (Table 1).

The next two will each do a single LASER intervention in layer 26, on the first linear layer of the MLP, reducing the rank down to a 0.01 and 0.1 fraction, respectively, of the maximum rank (which is the minimum of the two dimensions of the weight matrix). The rate argument relates to the ρ in the paper as ρ = 1 - 0.1 * rate.
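
To make this concrete, here is a minimal sketch of what such a rank reduction looks like; the helper names and the weight shape (16384 × 4096 for GPT-J's fc_in, so a maximum rank of 4096) are illustrative rather than the repository's actual code:

```python
import torch

def rho_from_rate(rate: float) -> float:
    # rho = 1 - 0.1 * rate, as described above
    return 1.0 - 0.1 * rate

def low_rank_approx(W: torch.Tensor, rate: float) -> torch.Tensor:
    # Keep only a rho-fraction of the maximum possible rank of W.
    max_rank = min(W.shape)
    k = max(1, int(rho_from_rate(rate) * max_rank))
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    return U[:, :k] @ torch.diag(S[:k]) @ Vh[:k, :]

# rate 9.9 -> rho ~ 0.01, keeping ~40 of 4096 singular values;
# rate 9.0 -> rho ~ 0.1,  keeping ~409 of 4096 singular values.
W = torch.randn(16384, 4096)
W_approx = low_rank_approx(W, rate=9.9)
```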

Now, the best hyperparameters for this setting in the paper are listed in Table 3 and correspond to [Uin, 24, 0.01], where Uin is fc_in in the code, 24 is the layer number (lnum), and ρ = 0.01 means a rate of 9.9. So I recommend running:

python3 intervention_gptj_fever.py --lname fc_in --rate 9.9 --lnum 24
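
As a quick sanity check on that mapping, you can invert the formula (an illustrative helper, not part of the repo):

```python
def rate_from_rho(rho: float) -> float:
    # Invert rho = 1 - 0.1 * rate  =>  rate = 10 * (1 - rho)
    return 10.0 * (1.0 - rho)

print(rate_from_rho(0.01))  # ~9.9, matching the Table 3 setting [Uin, 24, 0.01]
```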

Can you try this setting for me? I noticed that in your second comment you eventually tried layer 24, but with a rate of 8.0, and that it still gave you improvements over the base model. Is my understanding correct?

Lastly, we have observed that some variation is possible due to the inherent stochasticity of PyTorch's SVD call. Our experiments on one domain suggested that the gap is small but noticeable.
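
If you want to check how much of the gap comes from the SVD, one generic way to reduce run-to-run variation is to fix seeds and request deterministic kernels. A plain PyTorch sketch (not something our scripts already do, and GPU SVD may still not be fully deterministic):

```python
import os
import torch

# Must be set before CUDA initializes for deterministic cuBLAS behavior.
os.environ.setdefault("CUBLAS_WORKSPACE_CONFIG", ":4096:8")

torch.manual_seed(0)
torch.cuda.manual_seed_all(0)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
# warn_only=True: some ops (including some GPU SVD paths) have no
# deterministic implementation and would otherwise raise an error.
torch.use_deterministic_algorithms(True, warn_only=True)
```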

@dkmisra added the question (Further information is requested) label on Mar 22, 2024