Dear Butsugiri,

Thank you for sharing your code. I just have a clarification about the data_processor.vocab variable. After running the following lines:
cnn = ABCNN(n_vocab=len(vocab), embed_dim=embed_dim, input_channel=input_channel,
            output_channel=50, x1s_len=x1s_len, x2s_len=x2s_len,
            model_type=model_type,
            single_attention_mat=args.single_attention_mat)  # ABCNN apparently fixes output_channel at 50
model = Classifier(cnn, lossfun=sigmoid_cross_entropy,
                   accfun=binary_accuracy)
if args.glove:
    cnn.load_glove_embeddings(args.glove_path, data_processor.vocab)
if args.word2vec:
    cnn.load_word2vec_embeddings(args.word2vec_path, data_processor.vocab)
if args.gpu >= 0:
    cuda.get_device(args.gpu).use()
    model.to_gpu()
cnn.set_pad_embedding_to_zero(data_processor.vocab)
the data_processor.vocab variable only has 2 entries, and hence this will be the input to the model creation. Sorry, I haven't finished reading the whole code, but I wonder at this point whether that is the intention of that variable, or whether it should have contained all the vocabulary in the dataset?
Cheers,
Kurt
I suspect that the two entries in vocab are "pad" and "unk".
This is because on lines 69 and 70 of data_processor.py, the program checks whether each word is contained in word2vec's vocabulary (the pretrained model provided by Google).
This process mainly follows what the author of the original ABCNN paper did.
I think you'll get more entries in vocab if you remove the if statements on those lines.
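To make the mechanism concrete, here is a minimal sketch of how I read that filtering step; this is not the actual code from data_processor.py, and build_vocab / w2v_vocab are made-up names for illustration:

# Hypothetical sketch of the vocabulary filtering around lines 69-70 of
# data_processor.py (identifiers are illustrative, not the repo's actual names).
def build_vocab(tokens, w2v_vocab):
    vocab = {"<pad>": 0, "<unk>": 1}  # the two entries you are seeing
    for token in tokens:
        # The word2vec membership test I believe those lines perform; if no
        # token passes it, vocab keeps only <pad> and <unk>. Removing this
        # check assigns an id to every distinct token in the dataset.
        if token in w2v_vocab and token not in vocab:
            vocab[token] = len(vocab)
    return vocab

With the check removed, len(vocab) grows to the number of distinct tokens in the data, and n_vocab=len(vocab) in the ABCNN constructor changes accordingly.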