
Should handle an arbitrary stream of 1xx responses before the final response for a request. #12

Open
archaelus opened this issue Dec 4, 2013 · 17 comments

Comments

@archaelus
Contributor

No description provided.

@omarkj
Contributor

omarkj commented Dec 4, 2013

I don't understand this issue.

@ferd
Contributor

ferd commented Dec 4, 2013

1xx codes can be used for temporary or incomplete responses. See for example:

102 Processing (WebDAV; RFC 2518). As a WebDAV request may contain many sub-requests involving file operations, it may take a long time to complete the request. This code indicates that the server has received and is processing the request, but no response is available yet. This prevents the client from timing out and assuming the request was lost.

People could arguably decide to have a string of 1xx codes like "102: almost going", "103: progress 25%", then "105: okay this looks done", I guess. I'm not sure if we should handle them in a very fancy way. Code 100 at the very least.

@omarkj
Contributor

omarkj commented Dec 4, 2013

Thanks for clarifying.

@omarkj
Contributor

omarkj commented Jan 7, 2014

This one is solved as well, right?

@archaelus
Contributor Author

Do we relay intermediate responses? Do we eat them? Do we crash?

@ferd
Contributor

ferd commented Jan 7, 2014

We relay one, but error out if we get a second one, given 100 Continue cannot be followed by another non-terminal status code -- except for 101 Switching Protocols, which HTTPbis asks us to support. We choke on any other case.

@omarkj
Contributor

omarkj commented Mar 24, 2014

Is this still an issue? Open for 3 months.

@ferd
Contributor

ferd commented Mar 24, 2014

I'd vote to close this one as something we don't want to support, but that would require @archaelus's approval given he opened it.

@archaelus
Contributor Author

We need to document this on devcenter. "HTTP intermediate responses: we don't support these except in the following cases: 100 Continue and 101 Switching Protocols"

(I also don't think we should implement it unless someone comes up with a compelling use case and we know that there's actually browser support).

@ferd
Contributor

ferd commented Mar 26, 2014

WEBDAV is currently noted as not being supported in the new docs. Are there any other 1xx responses in there that people would expect to use? Is it legit to just add as many as we want?

@archaelus
Contributor Author

What bits of webdav don't we support? What bits of webdav could we easily support?

@ferd
Contributor

ferd commented Mar 26, 2014

I think the rest of WEBDAV is implicitly supported as far as statuses go, but has some ambiguous cases: http://en.wikipedia.org/wiki/List_of_HTTP_status_codes

  • 102 Processing (WebDAV; RFC 2518)
  • 207 Multi-Status (WebDAV; RFC 4918) (the body carries multiple XML sub-responses, each with its own status code)
  • 208 Already Reported (WebDAV; RFC 5842)
  • 422 Unprocessable Entity (WebDAV; RFC 4918)
  • 423 Locked (WebDAV; RFC 4918)
  • 424 Failed Dependency (WebDAV; RFC 4918)
  • 507 Insufficient Storage (WebDAV; RFC 4918)
  • 508 Loop Detected (WebDAV; RFC 5842)

And methods (we take anything):

  • PROPFIND — used to retrieve properties, stored as XML, from a web resource. It is also overloaded to allow one to retrieve the collection structure (a.k.a. directory hierarchy) of a remote system.
  • PROPPATCH — used to change and delete multiple properties on a resource in a single atomic act
  • MKCOL — used to create collections (a.k.a. a directory)
  • COPY — used to copy a resource from one URI to another
  • MOVE — used to move a resource from one URI to another
  • LOCK — used to put a lock on a resource. WebDAV supports both shared and exclusive locks.
  • UNLOCK — used to remove a lock from a resource

There's also an HTTP 'If' header and dependencies on ETags, but I haven't looked at it in depth.

@archaelus
Contributor Author

I think the 102 Processing status is the only way we'd break WebDAV through Vegur. Presumably they issue that repeatedly until they're done. If we supported that, we'd also give Heroku customers a way to 'do long running jobs easily'. (While you generate the PDF, emit '102 Processing\r\n\r\n' every 25s.)

@ferd
Contributor

ferd commented Mar 26, 2014

I think it would be possible to make it work if we went to support it outside of the 100-continue (deep) workflow, which would be way too confusing otherwise. But if not, it's a question of looping on every response while its status is < 200, without relaying it.

If you really want that feature in, I guess it's workable. It just can't be used with a 100 Continue that answered an Expect: 100-continue header, because any non-terminal status that follows 100 Continue in that context is an error and should be denied.

ferd added a commit that referenced this issue Mar 27, 2014
When a 100 continue expect is sent, a second one cannot be sent
legally according to the behavior we decided to specify.

Until we are ready to accept infinite streams of 1xx statuses (see
#12), this has to be handled
explicitly.
@ferd
Contributor

ferd commented Mar 27, 2014

After re-reading the HTTP spec, there is nothing explicitly forbidding sending two consecutive 100 Continues -- the spec only mandates that a server must send a terminal status once it's done processing the request. I've updated the docs to reflect this, and the commits referenced above go in that direction.

@daguej

daguej commented Oct 25, 2016

What is the current status of this?

I just uploaded an app to heroku:

var express = require('express'),
    app = express();

app.get('/long', function(req, res) {
    // Write a raw interim response every 10s. Note: _writeRaw is an
    // undocumented internal of Node's http.OutgoingMessage, so this
    // bypasses Express (and Node's public response API) entirely.
    var ivl = setInterval(function() {
        res._writeRaw('HTTP/1.1 102 Processing\r\n\r\n');
    }, 10000);

    // After 35s, stop the interim stream and send the final 200.
    setTimeout(function() {
        clearInterval(ivl);
        res.send({ ok: true });
    }, 35000);
});

app.listen(process.env.PORT || 3003);

...and then:

$ curl -i http://h102.herokuapp.com/long
HTTP/1.1 102 Processing
Server: Cowboy
Date: Tue, 25 Oct 2016 21:04:28 GMT
Connection: close
Via: 1.1 vegur

HTTP/1.1 102 Processing

HTTP/1.1 102 Processing

HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: application/json; charset=utf-8
Content-Length: 11
ETag: W/"b-gjgNHiY7YJPzx1NWkPzddQ"
Date: Tue, 25 Oct 2016 21:04:53 GMT
Connection: close

{"ok":true}

...which is more or less what I'd expect to see. The 102s are relayed to me and eventually I see the 200. This works properly in Chrome, Firefox, and IE.

What is interesting is that the heroku router adds its headers to the original 102 and then appears to treat the rest of the response as HTTP body bytes. From the logs:

at=info method=GET path="/long" host=h102.herokuapp.com request_id=dfc11416-7f80-4c88-b76a-7887df232ee1 fwd="x.x.x.x" dyno=web.1 connect=1ms service=35044ms status=102 bytes=293

...oops.

This should be supported. The spec says:

A client MUST be able to parse one or more 1xx responses received prior to a final response, even if the client does not expect one. A user agent MAY ignore unexpected 1xx responses.

A proxy MUST forward 1xx responses unless the proxy itself requested the generation of the 1xx response. For example, if a proxy adds an "Expect: 100-continue" field when it forwards a request, then it need not forward the corresponding 100 (Continue) response(s).

@ferd
Contributor

ferd commented Oct 26, 2016

From the README:

The proxy will return a configurable error code if the server returns a 100 Continue following an initial 100 Continue response. The proxy does not yet support infinite 1xx streams.

And:

Not Supported [...] HTTP Extensions such as WEBDAV, relying on additional 1xx status responses

Unfortunately, the Vegur application was written before RFC 7231, at a time when the spec had this to say:

Proxies MUST forward 1xx responses, unless the connection between the
proxy and its client has been closed, or unless the proxy itself
requested the generation of the 1xx response. (For example, if a
proxy adds a "Expect: 100-continue" field when it forwards a request,
then it need not forward the corresponding 100 (Continue)
response(s).)

Which was ambiguous and let us implement the current behaviour.

To my knowledge, there is currently no time allocated within Heroku projects for modifications such as WEBDAV support or updating Vegur to the RFC 7231 behaviour, even though I would like to make them at this point.

The best I can offer right now is to bring this to our product managers to see what can be done, or to hope for open-source contributions, which we would be happy to help guide and eventually deploy.

I can let you know what comes out of this.


4 participants