
SCP recursive sub-directory upload issue #34

Open · jlutran opened this issue Jan 6, 2020 · 6 comments

jlutran commented Jan 6, 2020

When uploading sub-directories containing more than one file with scp, an extra \x00 byte is inserted into the channel, causing an error while parsing the record in recv_inner().

How to reproduce:

--- a/sftpcloudfs/scp.py
+++ b/sftpcloudfs/scp.py
@@ -143,9 +143,15 @@ class SCPHandler(threading.Thread):
         if '\n' not in self.buffer:
             while True:
                 chunk = self.channel.recv(1024)
+                self.log.info('[recv_line] chunk=%s', chunk)
+                hex_chunk = "".join(x.encode('hex') for x in chunk)
+                self.log.info('[recv_line] hex chunk=%s', hex_chunk)
                 self.buffer += chunk
+                self.log.info('[recv_line] buffer=%s', self.buffer)
+                hex_buffer = "".join(x.encode('hex') for x in self.buffer)
+                self.log.info('[recv_line] hex buffer=%s', hex_buffer)
                 if '\n' in chunk:
                     break

Logs:

Jan  6 14:56:17 node-xxx paramiko[18892]: INFO: [recv_line] chunk=D0755 0 in
Jan  6 14:56:17 node-xxx paramiko[18892]: INFO: [recv_line] hex chunk=4430373535203020696e0a
Jan  6 14:56:17 node-xxx paramiko[18892]: INFO: [recv_line] buffer=D0755 0 in
Jan  6 14:56:17 node-xxx paramiko[18892]: INFO: [recv_line] hex buffer=4430373535203020696e0a
Jan  6 14:56:17 node-xxx paramiko[18892]: INFO: [recv_line] chunk=D0755 0 sub1
Jan  6 14:56:17 node-xxx paramiko[18892]: INFO: [recv_line] hex chunk=4430373535203020737562310a
Jan  6 14:56:17 node-xxx paramiko[18892]: INFO: [recv_line] buffer=D0755 0 sub1
Jan  6 14:56:17 node-xxx paramiko[18892]: INFO: [recv_line] hex buffer=4430373535203020737562310a
Jan  6 14:56:18 node-xxx paramiko[18892]: INFO: [recv_line] chunk=C0644 65536 file2
Jan  6 14:56:18 node-xxx paramiko[18892]: INFO: [recv_line] hex chunk=43303634342036353533362066696c65320a
Jan  6 14:56:18 node-xxx paramiko[18892]: INFO: [recv_line] buffer=C0644 65536 file2
Jan  6 14:56:18 node-xxx paramiko[18892]: INFO: [recv_line] hex buffer=43303634342036353533362066696c65320a
Jan  6 14:56:18 node-xxx paramiko[18892]: INFO: [recv_line] chunk=
Jan  6 14:56:18 node-xxx paramiko[18892]: INFO: [recv_line] hex chunk=00
Jan  6 14:56:18 node-xxx paramiko[18892]: INFO: [recv_line] buffer=
Jan  6 14:56:18 node-xxx paramiko[18892]: INFO: [recv_line] hex buffer=00
Jan  6 14:56:18 node-xxx paramiko[18892]: INFO: [recv_line] chunk=C0644 32768 file1
Jan  6 14:56:18 node-xxx paramiko[18892]: INFO: [recv_line] hex chunk=43303634342033323736382066696c65310a
Jan  6 14:56:18 node-xxx paramiko[18892]: INFO: [recv_line] buffer=
Jan  6 14:56:18 node-xxx paramiko[18892]: INFO: [recv_line] hex buffer=0043303634342033323736382066696c65310a

My (dirty) workaround:

--- a/sftpcloudfs/scp.py
+++ b/sftpcloudfs/scp.py
@@ -143,6 +143,8 @@ class SCPHandler(threading.Thread):
         if '\n' not in self.buffer:
             while True:
                 chunk = self.channel.recv(1024)
+                if chunk == '\x00':
+                    continue
                 self.buffer += chunk
                 if '\n' in chunk:
                     break
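
Chunk boundaries aren't guaranteed, so the NUL won't always arrive as its own one-byte chunk (the hex buffer 0043… in the log above shows it glued in front of the next record). A slightly more defensive variant of the same idea (a sketch, not tested here) strips leading NULs from the buffer instead of skipping NUL-only chunks:

--- a/sftpcloudfs/scp.py
+++ b/sftpcloudfs/scp.py
@@ -143,6 +143,8 @@ class SCPHandler(threading.Thread):
         if '\n' not in self.buffer:
             while True:
                 chunk = self.channel.recv(1024)
                 self.buffer += chunk
+                # drop stray protocol NULs at a record boundary
+                self.buffer = self.buffer.lstrip('\x00')
                 if '\n' in chunk:
                     break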

Any hints on what's causing this behaviour?

reidrac (Contributor) commented Jan 6, 2020

What was the SCP command?

SCP support was contributed to the project; it's a "nice to have", but it's not really part of SFTP and the implementation is incomplete.

jlutran (Author) commented Jan 6, 2020

My upload command was: scp -r /tmp/tests/in user@sftpcloudfs-host:/dir

File tree:

~$ ls -R /tmp/tests/in/
/tmp/tests/in/:
sub1	sub2

/tmp/tests/in/sub1:
file1  file2

/tmp/tests/in/sub2:
file3  file4

reidrac (Contributor) commented Jan 7, 2020

Can you run the command in verbose mode? (-v)

There's a commented-out "wait for ack" here: https://github.com/Memset/sftpcloudfs/blob/master/sftpcloudfs/scp.py#L213

That could be the 0 you see.

jlutran (Author) commented Jan 7, 2020

~$ scp -rv /tmp/tests/in host@sftpcloudfs-host:/dir
[...]
debug1: Sending command: scp -v -r -t /dir
Entering directory: D0755 0 in
Entering directory: D0755 0 sub1
Sending file modes: C0644 65536 file2
Sending file modes: C0644 32768 file1
debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
scp: 30.0s timeout
debug1: channel 0: free: client-session, nchannels 1
debug1: fd 0 clearing O_NONBLOCK
debug1: fd 1 clearing O_NONBLOCK
Transferred: sent 67932, received 2280 bytes, in 58.9 seconds
Bytes per second: sent 1152.5, received 38.7
debug1: Exit status 1
lost connection

I tried to uncomment L213, but it raises an SCP internal exception after the first file, because the ack value is not \x00 but the first character of the next record (C in this case). Note that the issue only occurs when there is more than one file inside a sub-directory.

I also noticed that this issue is not 100% reproducible (the extra \x00 is not always present), which is why I wrote the workaround in recv_line().
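
One guess about the non-determinism (pure speculation on my side): recv_line() reads up to 1024 bytes at a time, so the stray \x00 can be pulled into self.buffer along with the surrounding records, while the raw channel.recv(1) behind the commented "wait for ack" bypasses that buffer and may grab the C of the next record instead. A sketch of what reading the ack through the same buffer could look like (hypothetical recv_ack helper, not existing code):

    def recv_ack(self):
        # Consume exactly one status byte, honouring any data already
        # buffered by recv_line() so the two reads never race.
        while not self.buffer:
            self.buffer = self.channel.recv(1024)
        ack = self.buffer[0]
        self.buffer = self.buffer[1:]
        return ack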

reidrac (Contributor) commented Jan 7, 2020

Interesting! I'm not familiar with the protocol; I'll have a read to see what that 0 means.

reidrac (Contributor) commented Jan 17, 2020

Quick update! I've been looking at the protocol and I'm working on this, but I'm likely to tackle #27 and #28 first, because I'm having lots of issues running the project on Debian stable, and things can only get worse once Python 2 is no more.
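
For reference while I dig into it, my reading of the classic rcp/scp protocol so far (notes, not gospel): every response is a single status byte, where \x00 = OK, \x01 = warning and \x02 = fatal error (the last two are followed by a message line), and after sending a file's contents the source sends one extra \x00 "end of data" byte that the sink has to consume and acknowledge. If the sink never reads that byte, it surfaces as a stray \x00 in front of the next control record, which matches the logs above. A minimal sketch of the per-file exchange on the sink side (illustrative only, not the project's code):

    def recv_file(channel, size):
        # source -> sink  : "C0644 65536 file2\n"  control record
        # sink   -> source: "\x00"                  ack, go ahead
        # source -> sink  : <size bytes of data> + one status byte
        # sink   -> source: "\x00"                  ack, file stored
        remaining = size
        while remaining > 0:
            data = channel.recv(min(remaining, 1024))
            if not data:
                raise Exception('scp: connection closed mid-file')
            # ... write data to the backend here ...
            remaining -= len(data)
        status = channel.recv(1)  # the source's end-of-data byte
        if status != '\x00':
            raise Exception('scp: bad status after data: %r' % status)
        channel.sendall('\x00')   # tell the source the file was stored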
