add mscoco generative benchmark #63
Conversation
This looks quite simple, but it seems to me there should be a captioning task, and MS-COCO should then be made a dataset for it.
@rom1504 you are absolutely right. If it is ok, I would rather first use this just to run this specific evaluation on CoCa, and once that is done, add more tasks here or otherwise make this more general.
Can you rebase on main?
for idx, (img, _) in enumerate(tqdm(dataloader)):
    n_samples = img.shape[0]  # may be smaller for the last batch
    # map positions in this batch back to dataset-level image ids
    idxs = [indexer[idx * batch_size + id] for id in range(n_samples)]
    # generate caption token ids with CoCa's beam search
    out = model.generate(img.to(device), seq_len=30, generation_type="beam_search", num_beams=6, num_beam_groups=3, sot_token_id=49406, eos_token_id=49407)
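For reference, the ids 49406 and 49407 passed above are the CLIP BPE tokenizer's `<start_of_text>` and `<end_of_text>` tokens. A minimal sketch, assuming open_clip's exported `decode` helper, of how the generated token sequences could be turned into caption strings (illustrative only, not code from this PR):

```python
import open_clip

# `out` is the batch of generated token id sequences returned by model.generate above.
captions = []
for tokens in out:
    text = open_clip.decode(tokens)
    # strip the special start/end tokens emitted by the CLIP tokenizer
    text = text.split("<end_of_text>")[0].replace("<start_of_text>", "").strip()
    captions.append(text)
```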
Is all this still needed @gpucce?
This PR is supposed to add testing for generative models (e.g. CoCa, see mlfoundations/open_clip#308). It is far from complete, but for initial testing it should work with open_clip.
The command to run it is something like
clip_benchmark eval --dataset=mscoco_captions --dataset_root=coco_data_root --task=mscoco_generative --model=coca_ViT-B-32 --output=result.json --batch_size=16 --pretrained=../pretrained_test_model.pt
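The thread does not show which metrics the generative task reports, but a captioning benchmark on MS-COCO typically scores the generated captions against the reference captions with something like CIDEr. A minimal sketch using pycocoevalcap, with made-up example data (names and values are illustrative, not taken from this PR):

```python
from pycocoevalcap.cider.cider import Cider

# Both dicts map an image id to a list of caption strings.
references = {
    1: ["a dog runs across a grassy field", "a brown dog running on grass"],
    2: ["a plate of food on a wooden table"],
}
candidates = {
    1: ["a dog running through a field"],
    2: ["a plate with food on a table"],
}

cider = Cider()
score, per_image_scores = cider.compute_score(references, candidates)
print(f"CIDEr: {score:.3f}")
```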