exec_run unexpected empty output when container is running on a remote host #3332

Open
antontornqvist opened this issue Apr 28, 2025 · 4 comments · May be fixed by #3333

Comments


antontornqvist commented Apr 28, 2025

Hello,
I am observing incorrect output from exec_run when it is called on a running container on a remote host. The container is running a standard Ubuntu image.
The connection to the remote host is over an SSH-forwarded Unix socket.

Minimal reproducible example
Set up SSH forwarding:
ssh -L /tmp/docker-b.sock:/var/run/docker.sock -NTt -o ServerAliveInterval=30 -o ServerAliveCountMax=10 $DOCKER_HOST_B &

Connect to the remote Docker daemon and execute a command:

>>> import docker
>>> c = docker.DockerClient(base_url="unix:///tmp/docker-b.sock", version="auto", timeout=60, max_pool_size=8)
>>> c.containers.get("42270b507ba3").exec_run("echo hello")
ExecResult(exit_code=0, output=b'hello\n')
>>> c.containers.get("42270b507ba3").exec_run("echo hello")
ExecResult(exit_code=0, output=b'')
>>> c.containers.get("42270b507ba3").exec_run("echo hello")
ExecResult(exit_code=0, output=b'')
>>> c.containers.get("42270b507ba3").exec_run("echo hello")

Troubleshooting
I ran strace on the Python process that called exec_run and observed the following.
For the unexpected (empty) output:

sendto(9, "POST /v1.43/exec/0ab9f918306ddcfdb18b1e36dce81b9ec6ebb2f03e4a26108f5128f73766c8c4/start HTTP/1.1\r\nHost: localhost\r\nUser-Agent: docker-sdk-python/7.1.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: Upgrade\r\nUpgrade: tcp\r\nContent-Type: application/json\r\nContent-Length: 31\r\n\r\n", 286, 0, NULL, 0) = 286
recvfrom(9, "HTTP/1.1 101 UPGRADED\r\nContent-Type: application/vnd.docker.multiplexed-stream\r\nConnection: Upgrade\r\nUpgrade: tcp\r\nApi-Version: 1.43\r\nDocker-Experimental: false\r\nOstype: linux\r\n", 8192, 0, NULL, NULL) = 177
recvfrom(9, "Server: Docker/24.0.5 (linux)\r\n\r\n\1\0\0\0\0\0\0\6hello\n", 8192, 0, NULL, NULL) = 47

For the expected output:

sendto(10, "POST /v1.43/exec/832bad59596ae017de0428004e1e9073c2a0e5571ca1ac385df09a27763f6c67/start HTTP/1.1\r\nHost: localhost\r\nUser-Agent: docker-sdk-python/7.1.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: Upgrade\r\nUpgrade: tcp\r\nContent-Type: application/json\r\nContent-Length: 31\r\n\r\n", 286, 0, NULL, 0) = 286
recvfrom(10, "HTTP/1.1 101 UPGRADED\r\nContent-Type: application/vnd.docker.multiplexed-stream\r\nConnection: Upgrade\r\nUpgrade: tcp\r\nApi-Version: 1.43\r\nDocker-Experimental: false\r\nOstype: linux\r\nServer: Docker/24.0.5 (linux)\r\n\r\n", 8192, 0, NULL, NULL) = 210
recvfrom(10, "\1\0\0\0\0\0\0\6", 8, 0, NULL, NULL) = 8
recvfrom(10, "hello\n", 6, 0, NULL, NULL) = 6

So it looks like the issue is observed when the HTTP response headers are split across two recv calls, so that the tail of the headers and the first frame of the body arrive in the same recv.
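
To illustrate the mechanism, here is a minimal standalone sketch (a local socketpair standing in for the forwarded Docker socket; not docker-py code). Once the HTTP library's buffered reader has consumed the tail of the headers, any frame bytes delivered in the same segment sit in its user-space buffer, and a read on the raw socket comes back empty:

import socket

# A sketch of the failure mode, assuming a local socketpair in place of
# the forwarded Docker socket (not docker-py code).
a, b = socket.socketpair()

# The peer delivers the end of the headers and the first multiplexed
# frame in a single segment, as in the first strace capture above. The
# 8-byte frame header is [stream type, 0, 0, 0, payload length (big-endian)].
b.sendall(b"HTTP/1.1 101 UPGRADED\r\n\r\n"
          b"\x01\x00\x00\x00\x00\x00\x00\x06hello\n")

reader = a.makefile("rb")            # an io.BufferedReader, as http.client uses
while reader.readline() != b"\r\n":  # parse the headers via the buffered reader
    pass

print(reader.peek())  # b'\x01\x00\x00\x00\x00\x00\x00\x06hello\n'
a.setblocking(False)
try:
    a.recv(8192)      # reading the raw socket directly finds nothing
except BlockingIOError:
    print("raw socket empty; the frame is stuck in the reader's buffer")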

Full strace output:
docker-issue-strace.log

Additional info
Python version: Python 3.10.12
Docker Python SDK version: 7.1.0
Container image: ubuntu:22.04
Docker version:

Client:
 Version:           24.0.5
 API version:       1.43
 Go version:        go1.20.3
 Git commit:        24.0.5-0ubuntu1~22.04.1
 Built:             Mon Aug 21 19:50:14 2023
 OS/Arch:           linux/amd64
 Context:           default

Server:
 Engine:
  Version:          24.0.5
  API version:      1.43 (minimum version 1.12)
  Go version:       go1.20.3
  Git commit:       24.0.5-0ubuntu1~22.04.1
  Built:            Mon Aug 21 19:50:14 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.7.24
  GitCommit:        
 runc:
  Version:          1.1.7-0ubuntu1~22.04.2
  GitCommit:        
 docker-init:
  Version:          0.19.0
  GitCommit:        

antontornqvist commented Apr 28, 2025

After looking at this a bit more, it looks like the issue is that the code is trying to read data directly from the raw socket.
The problem is that the HTTP library uses a buffered reader, which has already read the data from the socket into its internal buffer.

So I suppose one potential solution would be to stop reading directly from the socket and use the buffered reader instead. I have no idea of the impact of such a change, though; I assume there must be some reason behind the current implementation of reading directly from the raw socket?
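
As a rough sketch of that direction (a hypothetical helper; the attribute chain comes from the pdb session below and consists of private urllib3/http.client internals, so this is illustrative only and not the actual patch):

def _get_buffered_response_reader(response):
    # response.raw._fp is the http.client.HTTPResponse; its .fp is the
    # _io.BufferedReader seen in the pdb session below. Reading frames
    # from this object first drains whatever was prefetched while the
    # headers were parsed, then falls through to the underlying socket.
    return response.raw._fp.fp

frames_iter would then consume from this file-like object via read()/readinto() instead of calling recv() on the raw socket.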

I also identified a workaround: putting a short sleep in the command seems to circumvent the issue: exec_run(["bash", "-c", "sleep .3 && echo hello"])

>>> c.containers.get("42270b507ba3").exec_run("echo hello")
> .venv/lib/python3.10/site-packages/docker/api/client.py(434)_read_from_socket()
-> gen = frames_iter(socket, tty)
(Pdb) bt
  <stdin>(1)<module>()
  .venv/lib/python3.10/site-packages/docker/models/containers.py(213)exec_run()
-> exec_output = self.client.api.exec_start(
  .venv/lib/python3.10/site-packages/docker/utils/decorators.py(19)wrapped()
-> return f(self, resource_id, *args, **kwargs)
  .venv/lib/python3.10/site-packages/docker/api/exec_api.py(172)exec_start()
-> output = self._read_from_socket(res, stream, tty=tty, demux=demux)
> .venv/lib/python3.10/site-packages/docker/api/client.py(434)_read_from_socket()
-> gen = frames_iter(socket, tty)
(Pdb) list 420,450
420             self._disable_socket_timeout(socket)
421  
422             yield from response.iter_content(chunk_size, decode)
423  
424         def _read_from_socket(self, response, stream, tty=True, demux=False):
425             """Consume all data from the socket, close the response and return the
426             data. If stream=True, then a generator is returned instead and the
427             caller is responsible for closing the response.
428             """
429             socket = self._get_raw_response_socket(response)
430  
431             import pdb
432             pdb.set_trace()
433  
434  ->         gen = frames_iter(socket, tty)
435  
436             if demux:
437                 # The generator will output tuples (stdout, stderr)
438                 gen = (demux_adaptor(*frame) for frame in gen)
439             else:
440                 # The generator will output strings
441                 gen = (data for (_, data) in gen)
442  
443             if stream:
444                 return gen
445             else:
446                 try:
447                     # Wait for all frames, concatenate them, and return the result
448                     return consume_socket_output(gen, demux=demux)
449                 finally:
450                     response.close()
(Pdb) socket
<socket.SocketIO object at 0x7f05f2231780>
(Pdb) response.raw._fp.fp.raw
<socket.SocketIO object at 0x7f05f2231780>
(Pdb) response.raw._fp.fp
<_io.BufferedReader name=6>
(Pdb) response.raw._fp.fp.raw.read()
b''
(Pdb) response.raw._fp.fp.peek()
b'\x01\x00\x00\x00\x00\x00\x00\x06hello\n'

@kiview kiview changed the title exec_run unexpected empty output exec_run unexpected empty output when container is running on a remote host Apr 28, 2025

kiview commented Apr 28, 2025

Thanks for raising and debugging this, @antontornqvist.

> I assume there must be some reason behind the current implementation of reading directly from the raw socket?

Don't overestimate the current code; it is entirely possible that not all supported Docker environments were considered (and tested) in the implementation. Using a buffered reader sounds fine to me. Do you want to look into providing a PR?

@antontornqvist

Sure, I can look into fixing this.

@antontornqvist antontornqvist linked a pull request May 9, 2025 that will close this issue
@antontornqvist

BTW, #2042 looks like a duplicate.
