output the whole sequence within TF #15

Open · mingchen62 opened this issue Oct 20, 2017 · 9 comments

@mingchen62

Thanks very much for the excellent code.

In the current prediction code I see it fetches the output indices one at a time (maybe for visualization purposes). attention.py, line 86:

for i in xrange(1, 160):
    inp_seqs[:, i] = sess.run(predictions, feed_dict={X: imgs, input_seqs: inp_seqs[:, :i]})

In my tests this takes quite a while on my GPU machine. My understanding is that it goes back and forth between TF and Python at every step. Would it be more efficient to have a single TF op output the whole sequence? I'd be happy to work on it if there is some guidance. (A rough sketch of the idea follows.)
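
A minimal sketch, assuming TF 1.x, of moving the greedy decode into the graph with tf.while_loop so one sess.run produces the whole sequence. The weights, dimensions, and start token below are made up for the sketch, not taken from the repo:

import tensorflow as tf

MAX_LEN, BATCH, VOCAB, DEC_DIM = 160, 4, 500, 128

# Stand-in decoder weights; in the real model one step would be the
# attention LSTM from tflib/ops.py instead of this plain projection.
emb = tf.get_variable('emb', [VOCAB, DEC_DIM])
proj = tf.get_variable('proj', [DEC_DIM, VOCAB])

def body(i, prev_tok, outputs):
    # One greedy step: embed the previous token, project to logits, argmax.
    h = tf.nn.embedding_lookup(emb, prev_tok)           # (B, DEC_DIM)
    logits = tf.matmul(h, proj)                         # (B, VOCAB)
    nxt = tf.cast(tf.argmax(logits, axis=1), tf.int32)  # (B,)
    return i + 1, nxt, outputs.write(i, nxt)

outputs = tf.TensorArray(tf.int32, size=MAX_LEN)
start = tf.zeros([BATCH], tf.int32)  # assume the <START> token has id 0
_, _, outputs = tf.while_loop(lambda i, *_: i < MAX_LEN, body,
                              [tf.constant(0), start, outputs])
pred_seqs = tf.transpose(outputs.stack())  # (B, MAX_LEN)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(pred_seqs).shape)  # (4, 160) from a single run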

@ritheshkumar95 (Owner)

I already had this code; I forgot to push it.

Check if it works now and let me know.

@mingchen62 (Author)

mingchen62 commented Oct 21, 2017 via email

@moezlinlin

When I run "python attention.py" I hit the same problem. Could you tell me how to solve it? Thank you very much!

Traceback (most recent call last):
  File "attention.py", line 37, in <module>
    out,state = tflib.ops.FreeRunIm2LatexAttention('AttLSTM',emb_seqs,ctx,EMB_DIM,ENC_DIM,DEC_DIM,D,H,W)
  File "/home/wll/im2latex/im2latex0123/tflib/ops.py", line 625, in FreeRunIm2LatexAttention
    V = tf.transpose(ctx,[0,2,3,1]) # (B, H, W, D)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/array_ops.py", line 1336, in transpose
    ret = gen_array_ops.transpose(a, perm, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 5694, in transpose
    "Transpose", x=x, perm=perm, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2958, in create_op
    set_shapes_for_outputs(ret)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2209, in set_shapes_for_outputs
    shapes = shape_func(op)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2159, in call_with_requiring
    return call_cpp_shape_fn(op, require_shape_fn=True)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 627, in call_cpp_shape_fn
    require_shape_fn)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 691, in _call_cpp_shape_fn_impl
    raise ValueError(err.message)
ValueError: Dimension must be 3 but is 4 for 'transpose' (op: 'Transpose') with input shapes: [?,?,80], [4].

@ritheshkumar95 (Owner)

@moezlinlin @mingchen62 I'm sorry, I'm quite busy and don't have the time to fix bugs every time TensorFlow updates its version. Please try fixing it yourself; I'm happy to accept your pull requests.

@wwjwhen

wwjwhen commented Mar 12, 2018

I hit the same problem. After going through the code, I found that FreeRunIm2LatexAttention(name, ctx, input_dim, output_dim, ENC_DIM, DEC_DIM, D, H, W) in tflib/ops.py expects ctx right after name, but the call in attention.py, out,state = tflib.ops.FreeRunIm2LatexAttention('AttLSTM',emb_seqs,ctx,EMB_DIM,ENC_DIM,DEC_DIM,D,H,W), passes emb_seqs in that position. So I think the problem has no relation to the TF version; it is a call-signature mismatch. Yet I do not know how to fix it :(

@wwjwhen

wwjwhen commented Mar 12, 2018

@moezlinlin @mingchen62 @ritheshkumar95 OK, I know how to solve the problem: replace FreeRunIm2LatexAttention with im2latexAttention. It seems the attention function is mis-used.
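
Concretely, assuming my reading of the signatures in tflib/ops.py is right, the one-line change in attention.py would be (im2latexAttention is presumably the teacher-forced variant, which takes the token embeddings and the 4-D image context in that order):

# before -- emb_seqs (3-D) lands in the ctx slot, so the 4-D transpose
# at ops.py:625 fails with "Dimension must be 3 but is 4":
# out,state = tflib.ops.FreeRunIm2LatexAttention('AttLSTM',emb_seqs,ctx,EMB_DIM,ENC_DIM,DEC_DIM,D,H,W)

# after:
out, state = tflib.ops.im2latexAttention('AttLSTM', emb_seqs, ctx, EMB_DIM, ENC_DIM, DEC_DIM, D, H, W)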

@vuthithao

@wwjwhen When I replace FreeRunIm2LatexAttention with im2latexAttention I get the error below:
InvalidArgumentError (see above for traceback): assertion failed: [Expected shape for Tensor rnn/sequence_length:0 is ] [20] [ but saw shape: ] [8]
    [[node rnn/Assert/Assert (defined at /root/im2latex-tensorflow/tflib/ops.py:533) = Assert[T=[DT_STRING, DT_INT32, DT_STRING, DT_INT32], summarize=3, _device="/job:localhost/replica:0/task:0/device:CPU:0"](rnn/All/_99, rnn/Assert/Assert/data_0, rnn/stack/_101, rnn/Assert/Assert/data_2, rnn/Shape_1/_103)]]
    [[{{node rnn/while/PyFunc/_278}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_2593_rnn/while/PyFunc", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]

Can you help me, please?
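
For anyone else hitting this assertion: the message compares a sequence_length vector sized for a batch of 20 against an actual batch of 8, which looks like a batch size hard-coded somewhere upstream. This is only a guess, not verified against the repo, but the usual remedy is to derive the length vector from the input's dynamic shape. A minimal sketch with illustrative names:

import tensorflow as tf

emb_seqs = tf.placeholder(tf.float32, [None, None, 80])  # (B, T, emb)
cell = tf.nn.rnn_cell.LSTMCell(128)

batch = tf.shape(emb_seqs)[0]                      # dynamic batch size
seq_len = tf.fill([batch], tf.shape(emb_seqs)[1])  # one length per sample
out, state = tf.nn.dynamic_rnn(cell, emb_seqs,
                               sequence_length=seq_len, dtype=tf.float32)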

@arieshx

arieshx commented Jan 19, 2019

> @moezlinlin @mingchen62 I'm sorry, I'm quite busy and don't have the time to fix bugs every time TensorFlow updates its version. Please try fixing it yourself; I'm happy to accept your pull requests.

You this sucker!!!!!

@ritheshkumar95 (Owner)

> > @moezlinlin @mingchen62 I'm sorry, I'm quite busy and don't have the time to fix bugs every time TensorFlow updates its version. Please try fixing it yourself; I'm happy to accept your pull requests.
>
> You this sucker!!!!!

Reporting and blocking this user for bad language.
