Exploring how Server Push interacts with HTTP caching below. Feedback welcome.
# Pushing Uncacheable Content
RFC7540, Section 8.2 says:
> Pushed responses that are not cacheable MUST NOT be stored by any HTTP cache. They MAY be made available to the application separately.
As a result, any pushed response that is not cacheable per the rules in RFC7234, Section 3 <http://httpwg.org/specs/rfc7234.html#response.cacheability> must not be stored by a receiving HTTP cache.
"Being made available to the application separately" could mean many things. It could be that a truly uncacheable response (e.g., with `Cache-Control: no-store`) would bypass the HTTP cache but then be stored by the application in anticipation of a future request. That might lead to surprising results for Web developers, though, because it effectively specifies yet another kind of browser caching (see separate thread).
However, they might still be usable if a browser API for Server Push emerges. See <https://github.com/whatwg/fetch/issues/51>.
Do we think that an uncacheable response should have any side effects on a client other than such a dedicated push-focused API (e.g., triggering an event)? See also thread on "Scope of Server Push."
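To make the routing above concrete, here is a minimal, non-normative sketch of how a receiving cache might split pushed responses between the HTTP cache and an application-level side channel. All names (`is_storable`, `receive_push`, the list-based side channel) are illustrative, and the storability check is a deliberately simplified subset of the RFC7234, Section 3 rules:

```python
def cache_control_directives(headers):
    """Parse the Cache-Control header value into a set of directive names."""
    value = headers.get("cache-control", "")
    return {part.split("=")[0].strip().lower()
            for part in value.split(",") if part.strip()}

def is_storable(status, headers):
    """A simplified subset of the RFC 7234 Section 3 storability rules.

    A real cache also checks the request method, Authorization handling,
    and (for shared caches) the "private" directive, and may store other
    status codes when explicit freshness information is present.
    """
    if "no-store" in cache_control_directives(headers):
        return False
    # Simplification: only status codes that are heuristically cacheable.
    return status in (200, 203, 204, 300, 301, 404, 405, 410, 414, 501)

def receive_push(status, headers, body, cache, side_channel):
    """Route a pushed response per RFC 7540 Section 8.2."""
    if is_storable(status, headers):
        cache[headers.get(":path", "/")] = (status, headers, body)
    else:
        # "MAY be made available to the application separately"
        side_channel.append((status, headers, body))
```

Whether the side channel should exist at all is exactly the open question above; the sketch only shows that the cache itself never sees the uncacheable response.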
# Pushing and Freshness
RFC7540, Section 8.2 says:
> Pushed responses are considered successfully validated on the origin server (e.g., if the "no-cache" cache response directive is present (RFC7234, Section 5.2.2)) while the stream identified by the promised stream ID is still open.
This implies that, while that stream is open, the pushed response can be used by the cache, even when it contains any (or all) of the following cache directives:
* no-cache
* max-age=0
* s-maxage=0 (for shared caches)
The underlying principle here is that while the response stream is still open, it's semantically equivalent to a "normal" response to a just-issued request; it would be senseless to require it to be immediately revalidated before handing it to the application for use.
The cache can also store the response, but once the stream is closed, if that response is stale -- either because of the presence of one of the directives above, or some combination of `Expires`, `Age`, `Date`, and `Cache-Control` -- it will need to be revalidated before use.
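Under this reading, the serve-from-cache decision might be sketched as follows. The function and parameter names are illustrative, not from any real cache implementation:

```python
def may_serve_without_validation(is_fresh, requires_validation,
                                 push_stream_open):
    """Decide whether a cached (possibly pushed) response can be used
    without contacting the origin."""
    if push_stream_open:
        # While the promised stream is open, RFC 7540 treats the response
        # as just-validated, even with no-cache or max-age=0 present.
        return True
    # Once the stream closes, the normal RFC 7234 rules apply: stale or
    # validation-requiring responses must be revalidated before use.
    return is_fresh and not requires_validation
```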
Interestingly, cache freshness (as defined in RFC7234, Section 4.2) is also related to the concept of "successful validation on the origin server":
> A response's age is the time that has passed since it was generated by, or successfully validated with, the origin server.
We could decide to read this to assume that the pushed response has `Age: 0` no matter what comes across the wire.
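A rough sketch of that reading, loosely following the age calculation in RFC7234, Section 4.2.3 but pinning a pushed response's initial age to zero regardless of what came across the wire. This is a simplification (it omits the response-delay correction), and the parameter names are illustrative:

```python
import time

def current_age(response_time, age_header, date_value, now=None,
                was_pushed=False):
    """response_time and date_value are Unix timestamps; age_header is
    the Age header value in seconds."""
    now = time.time() if now is None else now
    if was_pushed:
        # The "Age: 0" reading: treat the response as validated at receipt.
        corrected_initial_age = 0
    else:
        apparent_age = max(0, response_time - date_value)
        corrected_initial_age = max(apparent_age, age_header)
    return corrected_initial_age + (now - response_time)
```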
So, in summary -- any pushed response can be passed on to the application through a Server Push API. If it's cacheable, it can be stored in the cache, and even if it's stale or it requires validation, it can be served from cache while the response stream is still open (with some wiggle room).
Note that HTTP does not put constraints on _how_ the application uses that response after it comes through the API or the cache; it might use it multiple times (e.g., an image might occur more than once on a page, or more than one downstream client might have made the request). It's just that this reuse isn't in the context of a HTTP cache's operation.
Does this agree with other people's understanding?
# Pushing and Invalidation
When a server wants to remove the contents of a client's cache for a given URL, but doesn't know what it's to be replaced with yet, it needs to invalidate. The only native HTTP mechanism for cache invalidation is described in RFC7234, Section 4.4:
> A cache MUST invalidate the effective Request URI (Section 5.5 of RFC7230) as well as the URI(s) in the Location and Content-Location response header fields (if present) when a non-error status code is received in response to an unsafe request method.
Since it is triggered by unsafe request methods (like POST), this can't be used in Server Push.
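For reference, the quoted invalidation rule might be sketched like this, with the cache as a plain dict keyed by URI and the helper name illustrative. (A real cache would also check that the `Location` and `Content-Location` URIs share the request's origin before invalidating them.)

```python
# Methods whose definitions mark them unsafe (GET, HEAD, OPTIONS are safe).
UNSAFE_METHODS = {"POST", "PUT", "DELETE", "PATCH"}

def on_response(cache, method, request_uri, status, headers):
    """Invalidate per RFC 7234 Section 4.4: a non-error response to an
    unsafe request invalidates the request URI plus any Location and
    Content-Location URIs."""
    if method in UNSAFE_METHODS and status < 400:
        for uri in {request_uri,
                    headers.get("location"),
                    headers.get("content-location")} - {None}:
            cache.pop(uri, None)
```

Since Server Push can only promise safe, cacheable methods, this path is never taken for pushed responses -- which is the problem described below.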
We _could_ use this loophole a bit further down:
> A cache MUST invalidate the effective request URI (Section 5.5 of RFC7230) when it receives a non-error response to a request with a method whose safety is unknown.
... by defining a new method whose safety is explicitly unknown (since if its safety were defined, it either wouldn't be pushable, or wouldn't trigger invalidation).
However, doing that might cause problems with IANA, since we'd have to pick a safety value to register.
Another approach would be to push a 404 (Not Found) or 410 (Gone) to trigger invalidation. However, such a push would need to be uncacheable (e.g., with `Cache-Control: no-store`) to ensure that the error response itself wasn't later served from cache; unfortunately, that falls afoul of HTTP/2's requirement that uncacheable pushed responses not interact with the HTTP cache.
So, we've kind of built ourselves a nice trap here.
If invalidation is an important use case, we'll need to change one of these specifications, or invent a new protocol mechanism. Maybe a `CACHE_INVALIDATE` frame?
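Purely for illustration, such a frame could reuse the standard 9-octet HTTP/2 frame header (RFC7540, Section 4.1) with a URI as its payload. The type code `0xF1` and the wire format here are entirely made up; nothing like this is registered:

```python
CACHE_INVALIDATE_TYPE = 0xF1  # hypothetical frame type, not registered

def build_cache_invalidate(uri: str, stream_id: int = 0) -> bytes:
    """Encode a hypothetical CACHE_INVALIDATE frame: the standard HTTP/2
    frame header (24-bit length, 8-bit type, 8-bit flags, 31-bit stream
    identifier) followed by the UTF-8 URI to invalidate."""
    payload = uri.encode("utf-8")
    header = (len(payload).to_bytes(3, "big")
              + bytes([CACHE_INVALIDATE_TYPE, 0])        # type, flags
              + (stream_id & 0x7FFFFFFF).to_bytes(4, "big"))
    return header + payload
```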
Mark Nottingham https://www.mnot.net/
Thanks for starting this thread. I have questions about the following quote from the RFC:
On Tue, Aug 23, 2016 at 9:50 PM, Mark Nottingham <[hidden email]> wrote:
> RFC7540, Section 8.2 says:
>> Pushed responses are considered successfully validated on the origin server (e.g., if the "no-cache" cache response directive is present (RFC7234, Section 5.2.2)) while the stream identified by the promised stream ID is still open.
Chrome does not implement this. Some discussion starting here:
I can see why the above sentence was added to the RFC -- there needs to be some semantics for pushing immediately-stale responses, and the above sentence seems like reasonable semantics at first glance. However, I'm concerned that these semantics are not implementable in practice. On the client side, the user agent will typically store pushed responses in a side cache until they are matched with an actual request. On the server side, the server will send END_STREAM with the last DATA frame in the pushed response. This creates a race where the client may see END_STREAM before it's done enough processing to realize that it needs the pushed response. For example, consider cases where the server tries to push an "inlined" resource, but happens to push the inlined resource before the referencing HTML tag.
A server might try to avoid this race by holding the stream open, but how long should it keep the stream open? There's no way for the client to signal that a pushed response has matched a request. Further, the client cannot know (in general) if the server is holding a stream open to preserve validity of the no-cache response or because it's slow to send the final DATA frames.
I'm not sure what the right semantics are. One option is to allow the user agent optional leeway to consider the response validated for a longer period, perhaps using a timeout as Chrome does. Or, perhaps, a browser might consider a pushed response validated for the duration of the parent navigation event. Thoughts?
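The timeout option might look like the following sketch on the client side: treat a pushed response as validated for a fixed window after its stream closes, rather than tying validity strictly to the stream being open. The class, the window length, and all names are illustrative; in particular, the value below is not Chrome's actual timeout:

```python
import time

PUSH_VALIDATED_WINDOW = 30.0  # seconds; arbitrary illustrative value

class PushedEntry:
    """A pushed response held in the client's side cache."""

    def __init__(self, body):
        self.body = body
        self.stream_open = True
        self.closed_at = None

    def close_stream(self, now=None):
        self.stream_open = False
        self.closed_at = time.time() if now is None else now

    def considered_validated(self, now=None):
        """Validated while the stream is open, and for a grace window
        afterwards, sidestepping the END_STREAM race described above."""
        now = time.time() if now is None else now
        if self.stream_open:
            return True
        return (now - self.closed_at) < PUSH_VALIDATED_WINDOW
```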