Hi everyone, I am using EfficientNetB4 with Daniel's prepared data. While taking the course, the first thing I noticed was that without fine-tuning we never fit on the full dataset, so I ran that comparison with EfficientNetB4, and the results were a bit strange. In my model I used the same data augmentation as in the course and fine-tuned with 20 trainable layers (B4 has around 400 layers, and the 10 we unfroze in the course is roughly 5% of B0's layers). In these experiments, the model trained without augmentation and without fine-tuning (basically just B4 as a feature extractor) beat the fine-tuned, augmented model. I then reduced the number of unfrozen layers to 10, which improved accuracy a little, and I also fit the data without any fine-tuning or augmentation. Overall, plain EfficientNetB4 with nothing extra beat everything else we tried. So my question is: why is this happening? Should I change something, or is this normal to experience?
Below, model_2 represents fine-tuning with 10 unfrozen layers plus data augmentation.
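The actual notebook output wasn't included here, so for context this is a minimal sketch of the kind of setup being described: EfficientNetB4 as a frozen feature extractor with the course-style augmentation block, then unfreezing only the top 10 layers for fine-tuning. Input size, class count, and hyperparameters are assumptions, not the exact values used above.

```python
import tensorflow as tf
from tensorflow.keras import layers

IMG_SIZE = (224, 224)   # assumption: course-style input size
NUM_CLASSES = 10        # assumption: depends on the dataset split used

# Augmentation block in the same spirit as the course
data_augmentation = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.2),
    layers.RandomZoom(0.2),
], name="data_augmentation")

# Feature extraction: freeze the whole EfficientNetB4 base
base_model = tf.keras.applications.EfficientNetB4(include_top=False)
base_model.trainable = False

inputs = layers.Input(shape=IMG_SIZE + (3,), name="input_layer")
x = data_augmentation(inputs)
x = base_model(x, training=False)  # keep BatchNorm in inference mode
x = layers.GlobalAveragePooling2D(name="global_avg_pool")(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax", name="output_layer")(x)
model_2 = tf.keras.Model(inputs, outputs)

model_2.compile(loss="categorical_crossentropy",
                optimizer=tf.keras.optimizers.Adam(),
                metrics=["accuracy"])

# ... fit for a few epochs as a feature extractor first ...

# Fine-tuning: unfreeze only the top 10 layers of the base model
base_model.trainable = True
for layer in base_model.layers[:-10]:
    layer.trainable = False

# Re-compile with a lower learning rate so the unfrozen weights
# are only nudged rather than overwritten
model_2.compile(loss="categorical_crossentropy",
                optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                metrics=["accuracy"])
```

One thing worth double-checking in a setup like this is the learning rate and number of epochs used after unfreezing; fine-tuning a large backbone like B4 on a small dataset can easily hurt accuracy if the unfrozen layers move too fast.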