asyncio warnings and errors triggered by stressful code #109490
The warning is confusing because it doesn't clearly point to the issue. Here's a simpler snippet that will reproduce the messages you see:

```python
import asyncio
import time

async def main():
    await asyncio.create_subprocess_exec(
        "sleep", "0.1", stdin=asyncio.subprocess.PIPE
    )
    # subprocess is not awaited on nor terminated

asyncio.run(main())
time.sleep(0.5)
```

This code, just like yours, launches a process and then forgets about it. In your case, it's the early return on timeout that causes this. You have to decide what to do with the process you have created. Python doesn't really have an option to forget about a process and let it run in the background until the end, because that would be incorrect in most cases. So you might either wait for the process to exit (like in the example you copied from), or make it exit, for example by terminating or killing it; you might also have other options to request a clean shutdown. You probably saw a number of
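The forgotten-process variant above can be fixed by simply awaiting the child before the event loop shuts down; a minimal sketch (assuming a Unix `sleep` binary is available):

```python
import asyncio

async def main():
    proc = await asyncio.create_subprocess_exec(
        "sleep", "0.1", stdin=asyncio.subprocess.PIPE
    )
    # Await the child instead of forgetting it; this avoids the
    # "Loop ... is closed" warnings at interpreter exit.
    return await proc.wait()

rc = asyncio.run(main())
```

Because the process has exited before `asyncio.run()` returns, no transport is garbage-collected after the loop is closed.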
Indeed, there is a defect in my example code - thank you for pointing this out! This makes this bug report invalid. But then ... ;) In my original code I never got
I have now tried implementing a robust way to clean up when a process takes "too much time to complete" and noticed a few things:
I have arrived at
for now, but may like (the non-existing)
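A cleanup pattern along those lines might look like the following sketch (the `run_with_timeout` helper and its name are my illustration, not from this thread; it assumes terminating the child is acceptable for the program being run):

```python
import asyncio

async def run_with_timeout(*cmd, timeout):
    proc = await asyncio.create_subprocess_exec(*cmd)
    try:
        return await asyncio.wait_for(proc.wait(), timeout)
    except asyncio.TimeoutError:
        proc.terminate()          # or proc.kill() if SIGTERM may be ignored
        return await proc.wait()  # reap the child so no warning is emitted

rc = asyncio.run(run_with_timeout("sleep", "5", timeout=0.1))
```

The key point is that every code path ends in `await proc.wait()`, so the transport is closed while the event loop is still running.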
I think the bug report still has value. The warning from ThreadedChildWatcher ("Loop ... that handles pid ... is closed") is also a bit dubious because it provides little actionable information to users. (Note I'm not a CPython maintainer, just a contributor who happened to look into the same code paths recently.)

Note to myself or whoever is going to investigate this: the child watcher holds a strong reference to the subprocess transport, so I think the destructor is never going to be called before the process has actually exited. Whatever the intention was with killing the process on
Right, sorry. These warnings are ignored by default and you can run with
(More accurately: before any reference to the child process.) It could be informative to mention it in the docs in some way. subprocess.Popen mentions that the destructor emits a ResourceWarning if the process is still running, so something along the same lines maybe.
I think this is a niche use case that probably does not warrant support from the high-level API. It can still be done by using lower level APIs, e.g. subprocess.Popen() directly, but also loop.subprocess_exec(), suppressing warnings (which, again, are not shown by default). Still, why would you do that, given that it's pretty cheap to just monitor these processes and ensure they are disposed of correctly?
I don't mind the idea in general. An async context manager would have the same limitation in case the event loop is closed before the context is exited. And wait() might be problematic in interaction with pipes (see the warning in the docs), and it's unclear to me how it should behave on cancellation. Another idea might be a (non-async) context manager that calls terminate()/kill(), but I don't know if it would be helpful in support of the common use cases.
Adding a timeout to the execution of a child process is not something that generalizes well, though, because you need knowledge of the program you are executing. Is it safe to kill? Does it need a specific signal? Does it have a way to trigger clean shutdown? How safe is it to kill it during shutdown? When is it safe to close the pipes and other shared resources? Monitoring processes usually takes some application-specific code.
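As a sketch of the context-manager idea, an async variant could look like the following (`scoped_subprocess` is a hypothetical helper of my own, not an existing asyncio API, and it still has the limitation mentioned above if the loop is closed before the context exits):

```python
import asyncio
from contextlib import asynccontextmanager

# Hypothetical helper: guarantees the child is terminated and reaped
# when the block exits, even on early return or exception.
@asynccontextmanager
async def scoped_subprocess(*cmd):
    proc = await asyncio.create_subprocess_exec(*cmd)
    try:
        yield proc
    finally:
        if proc.returncode is None:
            proc.terminate()
        await proc.wait()

async def main():
    async with scoped_subprocess("sleep", "10") as proc:
        pass  # leaving the block early still cleans up the child
    return proc.returncode

rc = asyncio.run(main())
```

Whether terminate() is the right default is exactly the application-specific question raised above.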
Bug report

Bug description:

Running the code below yields many lines of unwanted error output of the kind

and also runtime exceptions after all user code has completed of the kind

This apparently is caused by having `asyncio.wait_for` with a `TIMEOUT_SECS` at an absurdly low value, which is that low to be able to reliably provoke the problem; IOW, this misbehaviour seems to depend on timeout errors being raised.

Of particular note is the `time.sleep(2)` at the end, which I use to delay process termination; this makes the problem visible on stderr. Note that the exit code of the Python process is always 0 (`watch --errexit --interval=0 python debugger-bug.py`).

The implementation is cobbled together from an example in the Python documentation (with fixes applied) and from StackOverflow for throttling best practices with a `Semaphore`.

I am running this on Fedora Linux 38, Python 3.11.5, on an Intel 11800H CPU, lots of memory spare, within VMware Workstation.
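The original reproducer was not captured in this page. A minimal reconstruction from the description above (structure and names like `run_one` are my guesses; only `TIMEOUT_SECS`, the `Semaphore` throttling, and the trailing `time.sleep(2)` come from the report) would be along these lines:

```python
import asyncio
import time

TIMEOUT_SECS = 0.0001  # absurdly low on purpose, to reliably provoke timeouts

async def run_one(sem):
    # Throttle concurrent children with a Semaphore, as in the report.
    async with sem:
        proc = await asyncio.create_subprocess_exec(
            "sleep", "1", stdout=asyncio.subprocess.PIPE
        )
        try:
            await asyncio.wait_for(proc.communicate(), TIMEOUT_SECS)
            return False
        except asyncio.TimeoutError:
            return True  # early return on timeout; the child is forgotten here

async def main():
    sem = asyncio.Semaphore(4)
    results = await asyncio.gather(*(run_one(sem) for _ in range(8)))
    return sum(results)

timed_out = asyncio.run(main())
time.sleep(2)  # delay interpreter exit so the warnings show up on stderr
```

Run against a pipe-buffered child, every task hits the timeout and abandons its process, which reproduces the warnings and late exceptions described above.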
CPython versions tested on:
3.11
Operating systems tested on:
Linux