
Etags for all API reads. #3

Open
currentoor opened this issue May 3, 2016 · 9 comments

@currentoor
Contributor

I noticed all the js/css/img assets are taking advantage of etags and returning 304 when appropriate, though I'm not sure where that is being applied.

However, it looks like etags are not used for om.next remote reads. A simple etag header based on the response to a remote read would probably be sufficient, and give a nice boost for large responses.

I can probably set this up in my app by injecting my own middleware. But I figured I'd offer to add it here if there was any interest?

@awkay
Contributor

awkay commented May 9, 2016

How would you propose to do that (since the URL is always the same for remote reads)? I'd really need to hear more details before I could say if it sounds generally useful.

@currentoor
Contributor Author

currentoor commented May 11, 2016

Using the techniques described here, we could get a one-to-one mapping of queries to URLs, like so:

```clojure
(let [query (parser {:state app-state} (om/get-query Dashboard) :remote)
      json  (t/write w query)
      hash  (.substring (sha-256 json) 0 16)]
  (str "/api/" hash))
;; "/api/02e397cc1447d688"
```

On the backend, I suggest nothing fancy.

  • Just process the whole request; then, in the outgoing part of some Ring middleware, use SHA-256 to construct an etag and attach it to the response.
  • Then (again in the middleware) compare the newly generated etag with the one in the request headers.
  • Based on etag equality, send either the whole response or a 304 back to the client.

This won't reduce load on the server or anything, but it should speed up read requests that have significant network latency (large responses, crappy connection).
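The steps above could be sketched as plain Ring middleware. This is only an illustrative sketch, not Untangled API: `wrap-etag` and `sha-256` are hypothetical names, and a real version would need to handle streaming bodies and non-200 responses.

```clojure
(ns example.etag
  (:import [java.security MessageDigest]))

(defn sha-256
  "Hex-encoded SHA-256 of a string (hypothetical helper)."
  [^String s]
  (let [digest (.digest (MessageDigest/getInstance "SHA-256")
                        (.getBytes s "UTF-8"))]
    (apply str (map #(format "%02x" %) digest))))

(defn wrap-etag
  "On the way out, attach an ETag derived from the response body.
  If it matches the request's If-None-Match header, send a 304 instead."
  [handler]
  (fn [request]
    (let [response (handler request)
          etag     (sha-256 (str (:body response)))]
      (if (= etag (get-in request [:headers "if-none-match"]))
        {:status 304 :headers {"ETag" etag} :body nil}
        (assoc-in response [:headers "ETag"] etag)))))
```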

@awkay
Contributor

awkay commented May 12, 2016

Ah, right. That was on the back burner, but I'm not sure it makes sense to have every request go through it; it would really depend on the application. As a general-purpose library, does it make sense to have that on all the time, given that it could generate significant server CPU load? Making it available on the server is simple enough, as you say. I wonder if we could let the client decide when to use this facility, so you could trigger it from the client. Some ideas:

  • An option to turn it on/off completely on the client?
  • Metadata on the query? If the metadata marker is there, then send it to the cacheable API route?

@currentoor
Contributor Author

What if we supported multiple remotes on the client? The default remote would just be current behavior and :etagged remote could append the url with a query hash. The server could then route and cache appropriately based on the client specified remote.

@awkay
Contributor

awkay commented May 14, 2016

My initial thought is that it's good... but I wonder if that isn't mixing the API. I'd rather not overload the remote concept of Om with a caching concern. I think I'd rather see something that augments the AST.

@currentoor
Contributor Author

Interesting, I didn't think of it that way at all. My understanding was that multiple remotes were meant for caching stuff like this (as demonstrated in the Om Next wiki link I posted above).

That said, I don't feel too strongly either way. If we go the AST route, how do you recommend the client annotate the AST, since we don't write read functions in untangled-client?

On the backend I was thinking we'd have something like Ring middleware, but for the Om Next parser. Like so:

```clojure
(defn etag-middleware
  [api-fn]
  (fn [{:keys [ast] :as env} k params]
    (let [resp (api-fn env k params)]
      (if (with-etag? ast)
        (with-etag resp)
        resp))))
```

This is how I've been doing om next specific logging, authentication, etc.
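(For reference, `with-etag?` and `with-etag` above are left undefined in the comment; here is one minimal guess at what they might look like, assuming the query AST carries a `:cacheable` entry in its params. All of these names are hypothetical, not part of any Untangled API.)

```clojure
(defn with-etag?
  "Hypothetical: true when the query AST was annotated for etag caching,
  assuming the marker lives under the AST's :params."
  [ast]
  (boolean (get-in ast [:params :cacheable])))

(defn with-etag
  "Hypothetical: tag a parser result with an etag via metadata, so outer
  Ring middleware can pick it up without recomputing a hash."
  [resp]
  (vary-meta resp assoc :etag (str (hash resp))))
```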

@awkay
Contributor

awkay commented May 16, 2016

We have not specified how to use multiple remotes yet, but I do plan on supporting that. So, by using your remote idea we'd have to do something like suffixes or other madness, which isn't great... Or we could make this a feature of the networking layer, since that is pluggable on the client.

However, when I think about the fact that all loads are explicit in Untangled, the client can just add a parameter to the load-* calls.

Just feeling things out:

```clojure
(load-data ... :cacheable true)
```

which would put something on the load marker, which the networking layer could then play with to generate the proper URL to the server.
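For example, a pluggable networking layer could check that marker and pick the URL accordingly. This is a sketch only: `request-url`, `sha-256-hex`, and the routing scheme are assumptions, reusing the query-hash idea from earlier in the thread.

```clojure
(ns example.net
  (:import [java.security MessageDigest]))

(defn- sha-256-hex
  "Hex-encoded SHA-256 of a string (hypothetical helper)."
  [^String s]
  (let [d (.digest (MessageDigest/getInstance "SHA-256")
                   (.getBytes s "UTF-8"))]
    (apply str (map #(format "%02x" %) d))))

(defn request-url
  "Route cacheable loads to a content-addressed path derived from the
  query; everything else goes to the normal remote endpoint."
  [query cacheable?]
  (if cacheable?
    (str "/api/" (subs (sha-256-hex (pr-str query)) 0 16))
    "/api"))
```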

@currentoor
Contributor Author

That makes way more sense than what I had in mind. Thanks a lot for taking the time to explain!

:cacheable true on the client seems like a nice approach.

What do you think of the om parser middleware in untangled-server?

Does anything else need to be spec'd out, or is this an acceptable API for etag caching? Feel free to take your time.

@awkay
Contributor

awkay commented May 16, 2016

Let's see...

I think the middleware sounds right. Security is the only thing that comes to mind: you'd be opening up a handler that accepts incoming requests on arbitrary paths (/api/some-hash). I can't think of any inherent problems... but you'll want to be careful.

Would be an awesome addition!
