Description
I encountered an issue with crochet not terminating the twisted deferred correctly when raising the TimeoutError exception.
Code for reference
def test_eventual_result_timeout(server_url):
    # server is incredibly slow
    eventual_result = fido.fetch(server_url + '100sec')
    # make sure no timeout error is thrown here but only on result retrieval
    assert eventual_result.original_failure() is None
    with pytest.raises(crochet.TimeoutError):
        eventual_result.wait(timeout=1.0)
    assert eventual_result.original_failure() is None
    print("Sleeping a bit to make sure a potential exception in the reactor "
          "thread due to connectionLost is trapped in eventual_result")
    time.sleep(5)
    assert eventual_result.original_failure() is None
    print("Finished sleeping. In a moment we will lose references to "
          "eventual_result and hence the deferred will be disposed of "
          "incorrectly, causing the CRITICAL message to be logged right "
          "after this message....")
@crochet.run_in_reactor
def fetch_inner(url, method, headers, body, timeout, connect_timeout):
    bodyProducer, headers = _build_body_producer(body, headers)
    deferred = Agent(reactor, connect_timeout).request(
        method=method,
        uri=url,
        headers=listify_headers(headers),
        bodyProducer=bodyProducer)

    def response_callback(response):
        finished = Deferred()
        response.deliverBody(HTTPBodyFetcher(response, finished))
        return finished

    deferred.addCallback(response_callback)
    return deferred
def fetch(
    url, method='GET', headers=None, body='', timeout=DEFAULT_TIMEOUT,
    connect_timeout=DEFAULT_CONNECT_TIMEOUT,
    content_type=DEFAULT_CONTENT_TYPE, user_agent=DEFAULT_USER_AGENT,
):
    crochet.setup()
    return fetch_inner(url, method, headers, body, timeout, connect_timeout)
Traceback
Everything was going fine (the TimeoutError was correctly raised), but the twisted reactor thread was failing because the connection was closed incorrectly. Traceback:
Sleeping a bit to make sure a potential exception in the reactor thread due to
connectionLost is trapped in eventual_result
Finished sleeping. In a moment we will lose references to eventual_result and hence
the deferred will be disposed of incorrectly, causing the CRITICAL message to
be logged right after this message....
CRITICAL:twisted:Unhandled error in EventualResult
Traceback (most recent call last):
Failure: twisted.web._newclient.ResponseNeverReceived: [<twisted.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion: Connection lost.>]
All assertions were true, showing that no exception is trapped in the EventualResult. The exception happens exactly when the eventual_result is garbage collected.
The problem
From Twisted documentation:
If a Deferred is garbage-collected with an unhandled error (i.e. it would call the next errback if there was one), then Twisted will write the error’s traceback to the log file. This means that you can typically get away with not adding errbacks and still get errors logged. Be careful though; if you keep a reference to the Deferred around, preventing it from being garbage-collected, then you may never see the error (and your callbacks will mysteriously seem to have never been called).
In this case the ConnectionLost exception is caused by the garbage collector disposing of the deferred when we lose all references to the EventualResult (and hence to the deferred). In fact, nothing is logged for as long as the eventual_result is alive.
I think that raising TimeoutError here without disposing of the deferred causes the Agent connection to be closed incorrectly, and the exception is then thrown in the reactor thread after the TimeoutError has been raised.
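The mechanism can be illustrated without Twisted at all. Below is a toy, purely hypothetical stand-in for a Deferred (the class and attribute names are mine, not Twisted's): it records a failure and, like Twisted, complains only when the last reference is dropped and the object is garbage collected.

```python
import gc

logged = []  # stand-in for Twisted's log file


class FakeDeferred:
    """Toy model of a Deferred holding an unhandled failure.

    Illustrative only -- not Twisted's implementation. The point it
    demonstrates: the unhandled error surfaces at garbage-collection
    time, not when the failure actually occurs.
    """

    def __init__(self):
        self._failure = None
        self._handled = False

    def errback(self, failure):
        # Record the failure; nothing consumes it, so it stays unhandled.
        self._failure = failure

    def addErrback(self, fn):
        self._handled = True
        fn(self._failure)

    def __del__(self):
        # Like Twisted, complain only when the object is collected.
        if self._failure is not None and not self._handled:
            logged.append(
                "CRITICAL: Unhandled error in Deferred: %r" % (self._failure,))


d = FakeDeferred()
d.errback(ConnectionError("Connection to the other side was lost"))
assert not logged   # nothing logged while `d` is still alive
del d               # dropping the last reference triggers the message
gc.collect()
assert logged       # the CRITICAL line appears only after collection
```

This mirrors the test above: all assertions on original_failure() pass while eventual_result is referenced, and the CRITICAL log appears only once it goes out of scope.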
Verification
As a last step to verify the problem, I added self.cancel() before raising the TimeoutError so that the deferred is cancelled properly:
if not self._result_set.is_set():
    self.cancel()
    raise TimeoutError()
I tested this with our internal acceptance test suite.
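The idea behind the fix can be sketched with plain threading primitives. This is a hypothetical model, not crochet's actual internals; the `_result_set` name follows the snippet above, but the class and the cancel callback are invented for illustration.

```python
import threading


class TimeoutError(Exception):
    """Stand-in for crochet.TimeoutError (illustrative only)."""


class EventualResultSketch:
    """Toy model of the proposed fix: cancel the underlying deferred
    *before* raising TimeoutError, so reactor-side work is torn down
    deliberately instead of by the garbage collector later."""

    def __init__(self, cancel_callback):
        self._result_set = threading.Event()
        self._cancel = cancel_callback

    def set_result(self):
        self._result_set.set()

    def cancel(self):
        # In real crochet this would cancel the deferred in the reactor.
        self._cancel()

    def wait(self, timeout):
        self._result_set.wait(timeout)
        if not self._result_set.is_set():
            self.cancel()          # the proposed addition
            raise TimeoutError()


cancelled = []
r = EventualResultSketch(lambda: cancelled.append(True))
try:
    r.wait(timeout=0.05)
except TimeoutError:
    pass
assert cancelled  # the deferred was cancelled before TimeoutError propagated
```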
Why not handle twisted.internet.error.ConnectionLost with a twisted errback?
The problem is a systematic erroneous disposal of the deferred object, which always causes the connection to be closed incorrectly in the same way. Suppose I were to handle this on our side like this:
def handle_timeout_errors(error):
    if error.check(twisted.web.client.ResponseNeverReceived):
        if error.value.reasons[0].check(
            twisted.internet.error.ConnectionLost
        ):
            # we get here if crochet aborts the connection incorrectly
            # when the EventualResult timeout is due. Nothing to raise
            # because crochet will take care of raising a TimeoutError.
            return
    return error

deferred.addErrback(handle_timeout_errors)
I would then be mistakenly swallowing all potential connectionLost failures, even those due to other causes (for example the server aborting the connection).
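Why the errback over-matches can be shown with a toy failure model (pure Python; the classes below are hypothetical stand-ins for twisted.python.failure.Failure and twisted.internet.error.ConnectionLost): a ConnectionLost caused by the server aborting looks identical to one caused by crochet's disposal, so the filter cannot tell them apart.

```python
class ConnectionLost(Exception):
    """Stand-in for twisted.internet.error.ConnectionLost."""


class Failure:
    """Toy model of twisted.python.failure.Failure."""

    def __init__(self, value):
        self.value = value

    def check(self, exc_type):
        # Twisted's Failure.check returns the matching type or None;
        # truthiness is all the toy errback below needs.
        return isinstance(self.value, exc_type) or None


def handle_timeout_errors(error):
    # Same shape as the errback above, reduced to the toy model.
    if error.check(ConnectionLost):
        return None  # swallowed: assumed to be crochet's disposal
    return error


# A ConnectionLost caused by crochet disposing of the deferred...
crochet_side = Failure(ConnectionLost("disposed deferred"))
# ...and one caused by the server aborting the connection:
server_side = Failure(ConnectionLost("server reset"))

# The filter swallows both -- the causes are indistinguishable here.
assert handle_timeout_errors(crochet_side) is None
assert handle_timeout_errors(server_side) is None
```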
Action points
I understand crochet might not want to dispose of deferred objects before raising the TimeoutError, in order to allow multiple retries on result retrieval.
Q: Would crochet be interested in a way to dispose of/destroy the deferred? Something like this on stackoverflow. Otherwise I will continue this discussion about handling it on our side.