Concepts to improve Http2.0

Concepts to improve Http2.0

Wesley Oliver
Hi,

I am not new to the IETF; however, I have yet to make an official submission.

I would like to put forward a concept that can further improve the performance of HTTP/2.0.
I also have a couple of other concepts regarding content-expiry headers, which would affect HTTP/1.1.
Additionally, I would like to look into concepts to prevent unnecessary pushes of content that is already cached by the browser. Given mobile bandwidth constraints, there is an obvious benefit in not pushing content that is already cached.

The full document on the concept can be found at the link below, and the abstract follows this email.


Could you please advise on the path to follow?


Kind Regards,

Wesley Oliver

HTTP Response Stream - an optimistic approach to performance improvement, and the snowball effect of a response-body programming paradigm shift

Abstract


Traditionally in HTTP/1.1 one is required to buffer an HTTP response on the server side, in case a change to the headers must be made somewhere during the page-generation code, because headers are not allowed to change after the message body has begun transmission. Removing this constraint in HTTP/2.0 would open the door to a paradigm shift in response programming. The benefits: improved and optimal bandwidth utilization, reduced overall page-render resource latency, and potentially an increase in the number of page requests a server can process.

Concept:

Allow multiple responses to be sent over the wire for the same request, whereby the last response transmitted over the wire forms the official response that is permanently rendered in the client browser.


This is an optimistic approach for the common case in which the response will not change, eliminating the need to buffer it. As soon as the network buffer holds a full packet, or has been forcibly flushed, the data can be transmitted over the wire, reducing the response latency experienced by the client. It also allows improved bandwidth utilization: the server can start sending response packets immediately after receiving the request, reclaiming bandwidth otherwise wasted while the response is generated and buffered before transmission.
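As a rough illustration of the flushing behaviour described above (a sketch only: the packet size and function names are invented, and this is not part of any protocol):

```python
# Illustrative sketch: emit response bytes as soon as a full packet is
# available, or when a flush is forced, instead of buffering the whole body.
# PACKET_SIZE and the stream_response() API are invented for illustration.

PACKET_SIZE = 1400  # roughly an Ethernet-MTU-sized payload (an assumption)

def stream_response(chunks, flush_after=None):
    """Yield wire-ready packets from an iterable of body chunks.

    A packet is emitted whenever the buffer holds PACKET_SIZE bytes,
    or when the chunk index equals `flush_after` (a forced flush).
    """
    buf = bytearray()
    for i, chunk in enumerate(chunks):
        buf.extend(chunk)
        while len(buf) >= PACKET_SIZE:          # full packet: send now
            yield bytes(buf[:PACKET_SIZE])
            del buf[:PACKET_SIZE]
        if flush_after is not None and i == flush_after and buf:
            yield bytes(buf)                    # forced flush of a partial packet
            buf.clear()
    if buf:                                     # final flush at end of response
        yield bytes(buf)

packets = list(stream_response([b"x" * 3000, b"y" * 100], flush_after=1))
# Two full 1400-byte packets go out immediately; the 300-byte remainder is
# flushed early rather than waiting for more body bytes.
```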





--
--
Web Site that I have developed:
http://www.swimdynamics.co.za


Skype: wezley_oliver
MSN messenger: [hidden email]

RE: Concepts to improve Http2.0

Lucas Pardue

Hi Wesley,

 

I had a look over your document.

 

Is the crux of your problem statement that you want to send out dynamically generated content as early as possible? Could your problem be solved by the use of chunked transfer encoding and Trailers [1]? In the HTTP/2 frame format, the simplest such response would be a series of frames: HEADERS, DATA, HEADERS (trailers, with the END_STREAM flag). This is explained in more detail in RFC 7540 Section 8.1 [2].
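The frame sequence described here can be modelled as a quick sanity check. A sketch: the tuple representation and the validator below are invented for illustration; only the frame names and the rule that a trailing HEADERS must carry END_STREAM come from RFC 7540.

```python
# Toy model of the HTTP/2 response shape: HEADERS, DATA..., then an
# optional second HEADERS (trailers) that must end the stream.

def valid_response(frames):
    """Check a list of (frame_type, flags) tuples against the simple
    HEADERS / DATA* / trailing-HEADERS shape of an HTTP/2 response."""
    if not frames or frames[0][0] != "HEADERS":
        return False                    # response must open with HEADERS
    seen_trailers = False
    for ftype, flags in frames[1:]:
        if seen_trailers:
            return False                # nothing may follow the trailers
        if ftype == "HEADERS":
            # A second HEADERS block is a trailer and must end the stream.
            if "END_STREAM" not in flags:
                return False
            seen_trailers = True
        elif ftype != "DATA":
            return False
    return True

ok = valid_response([
    ("HEADERS", {"END_HEADERS"}),
    ("DATA", set()),
    ("DATA", set()),
    ("HEADERS", {"END_HEADERS", "END_STREAM"}),   # trailers
])
bad = valid_response([
    ("HEADERS", {"END_HEADERS"}),
    ("HEADERS", {"END_HEADERS"}),   # trailer without END_STREAM: invalid
])
```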

 

In the examples included in your document there are multiple “Dependent Resources” that get pushed. Are these independent static resources that the dynamically generated content refers to?

 

As far as my understanding goes, the current protocol mechanisms should permit chunked transfer and push promises without needing to modify the stream life cycle. Pushed resources would sit in the client cache, ready to be used by the dynamically generated content when it is received and parsed. In other words, you could achieve your proposed improved timing diagram with current mechanisms.

 

Regards

Lucas

 

[1] https://tools.ietf.org/html/rfc7230#section-4.1.2

[2] https://tools.ietf.org/html/rfc7540#section-8.1

 

 

----------------------------

http://www.bbc.co.uk
This e-mail (and any attachments) is confidential and may contain personal views which are not the views of the BBC unless specifically stated.
If you have received it in error, please delete it from your system.
Do not use, copy or disclose the information in any way nor act in reliance on it and notify the sender immediately.
Please note that the BBC monitors e-mails sent or received.
Further communication will signify your consent to this.

---------------------


Re: Concepts to improve Http2.0

Dennis Olvany
Wesley,

You may be interested in the following document.

https://tools.ietf.org/html/draft-kazuho-h2-cache-digest-01
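For flavour, a much-simplified sketch of the cache-digest idea: the client summarizes its cached URLs so the server can skip pushes the client already has. The real draft uses a Golomb-compressed set carried in a dedicated frame; the truncated-hash set and the function names below are invented stand-ins.

```python
import hashlib

def digest(urls, bits=16):
    """Client side: a set of `bits`-bit hashes of the cached URLs
    (a crude stand-in for the draft's Golomb-coded set)."""
    out = set()
    for u in urls:
        h = int.from_bytes(hashlib.sha256(u.encode()).digest()[:4], "big")
        out.add(h & ((1 << bits) - 1))
    return out

def pushes_to_send(candidates, client_digest, bits=16):
    """Server side: drop push candidates whose hash appears in the digest.
    False positives (skipping an uncached resource on a hash collision)
    are possible; false negatives are not."""
    return [u for u in candidates
            if (int.from_bytes(hashlib.sha256(u.encode()).digest()[:4], "big")
                & ((1 << bits) - 1)) not in client_digest]

cached = ["/app.css", "/app.js"]
d = digest(cached)
remaining = pushes_to_send(["/app.css", "/app.js", "/logo.png"], d)
# The two cached resources are always filtered out; "/logo.png" survives
# unless its truncated hash happens to collide with a cached one.
```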


Re: Concepts to improve Http2.0

Adrien de Croy
In reply to this post by Wesley Oliver
 
The problem with deferring response headers until after the content is that proxies often make policy decisions based on response headers, and therefore need them all up front.
 
For this reason, trailers are also a problem.
 
Adrien
 

Re: Concepts to improve Http2.0

Poul-Henning Kamp
--------
In message <em51dddd7f-de76-4e87-abcb-0f315b115499@bodybag>, "Adrien de Croy" writes:

>The problem with deferring headers in responses to after content, is
>that proxies often make policy decisions based on response headers, and
>therefore need these to be all up front.
>
>Trailers for this reason are also a problem

We talked about this in the workshop, and yes, trailers *in general*
are a problem, but the specific trailers people care about are not.

The trailers people ask for, as far as I understood:

        Etag

        Set-cookie

        Cache-Control(/Expires/Age)

They are *not* a problem.

--
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
[hidden email]         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.


Re: Concepts to improve Http2.0

Wesley Oliver
In reply to this post by Lucas Pardue
Hi,

After taking your replies into account, to summarize, this is where I believe this is going.

Typically headers can be deferred as stated; however, the concern about proxies may well be valid. For what I want it wouldn't matter, as proxies would have to update themselves to support this modification, just as they had to update themselves to support HTTP/2.0.

Yes, one could delay the output of the response status code, but unfortunately if there is an error in the page, or a last-minute change, then the data frame/response body would still be stale and would not reflect the new page and status code.

A header CONTINUATION frame could be used as a hack, if the browser implementation allowed it, as an alternative to the JavaScript page redirect while pushing content on another stream, which I proposed in the document below.

How many browsers support processing a response body with more than one <html> page section, where the last one wins?
https://www.w3.org/TR/html5/semantics.html#semantics — refer to section 4.1, The root element.

I envisage the following workflow:

Request
Response Headers - for the current request.
Data Frame - requested page.
Response Headers - Continuation - output 404. At this point the body is stale for the new status page, and a new data frame would need to be transmitted.
Data Frame - the 404 body; but the browser wouldn't understand this and would currently treat it as part of the same response body. Problem.

Concept of how I envisage the modified HTTP/2.0 state to work:

Request
Response Headers - for the current request.
Data Frame - requested page.

Optimistically send END_HEADERS, then START_HEADERS with:
Response Headers - output 404. Because this is not a continuation headers frame, it implicitly resets the stream state, as it is treated like a first headers frame. The browser then also knows to reset its internal workings so that it can render a new response body for the same URL.
Data Frame - start, to mark the boundary between the old response body and the new one.
End Stream.


I still believe the HTTP/2.0 streaming and multiplexing life-cycle changes need to be supported, to make this behavior less hacky and more concrete.

Does anyone differ in their understanding?

What would the next steps be for moving this concept forward into an official request?

Kind Regards,

Wesley Oliver




On Wed, Jul 27, 2016 at 3:33 PM, Lucas Pardue <[hidden email]> wrote:

Hi,

 

HTTP/2 does not use the status line; instead it defines the :status pseudo-header field, which MUST be included in a response. This carries the status code only.

 

With chunked transfer encoding you could return HEADERS with a 200 status and END_HEADERS set, start sending body bytes across multiple DATA frames, and finish by signalling completion with a final HEADERS with END_HEADERS and END_STREAM set (this is the HTTP trailer). The browser should process the body bytes as it receives them, i.e. adding data to the DOM.

 

This method should in theory work equally well for dynamically generated pushed resources. The server should multiplex all of the frames across the connection.

 

In the failure case, you could add a custom trailer field to indicate the nature of the failure (e.g. http://engineering.pivotal.io/post/http-trailers/).
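A minimal sketch of that pattern: stream the body optimistically under a 200, and report any late failure in a trailer field instead of rewriting the status. The `status-detail` trailer field and the generator/tuple shapes here are hypothetical; only the stream-then-trailers idea comes from the discussion.

```python
# Stream page-generation output as DATA payloads; if generation fails
# partway, record the error in a (hypothetical) trailer field.

def generate(parts):
    """Run `parts` (callables returning bytes) in order, yielding
    ("DATA", payload) tuples as each completes; finish with a
    ("TRAILERS", dict) tuple carrying the outcome."""
    trailers = {"status-detail": "ok"}       # hypothetical trailer field
    try:
        for part in parts:
            yield ("DATA", part())
    except Exception as exc:
        # Too late to change :status; signal the failure in the trailer.
        trailers["status-detail"] = f"error: {exc}"
    yield ("TRAILERS", trailers)

def boom():
    raise RuntimeError("template failed")

frames = list(generate([lambda: b"<html>", boom]))
# One DATA frame goes out before the failure; the trailers then carry
# the error detail instead of a rewritten status code.
```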

 

Your initial proposal sounds like a modification/extension of HTTP/2 semantics (RFC 7540 Section 5.5), which MUST be negotiated before being used. The negotiation could be implemented via an exchange of SETTINGS frames, but that would introduce additional round trips and probably negate some of the latency benefit.

 

Lucas

 

From: Wesley Oliver [mailto:[hidden email]]
Sent: 27 July 2016 13:46
To: Lucas Pardue <[hidden email]>
Subject: Re: Concepts to improve Http2.0

 

Hi,

 

Yes, I would like to push out dynamically generated content as soon as possible, whether the dynamically generated content originates from a push response or from the current request's response. The server would then alternate transmission between the dynamically generated push responses and the current request's response.

 

 

 

It is not very clear to me right now, given that HTTP/2 wraps the existing protocol, whether the newer HTTP/2 protocol relaxes the strict requirement on the status line, such that it could appear in a continuation frame, or relaxes the field order of the header sections.

 

If you read the extracts from HTTP/1.1 below, there is more to this in the details of the headers.

 

Ideally I would like to be optimistic and send a response with a status of 200 and the dynamically generated message-body sections as they become available, without buffering. If something were to go wrong, I would then like to send a response that overrides the current response message with one carrying, say, a 403 or 308 status code.

 

Per the details of the headers section for HTTP/1.1, the browser wouldn't be able to display the page in the interim, or possibly even start parsing the contents of the body, as described below. I would like browsers, potentially in the future, to be able to start parsing the body as it arrives. As you can see in my very last diagram, I indicate the non-blocking behavior for the body, where it is processed immediately.

 

As per the Field Order section of HTTP/1.1 below, more attention may have to be paid to the details, and to recommendations on what would and wouldn't block the client-side browser from starting to process the body/message before the final headers are received.

If the browser is not able to process the message/body as soon as possible, we may have improved bandwidth utilization over the wire; however, there will still be an up-front render latency and a peak workload (battery consumption), when the message body could instead have been parsed and rendered in the meantime.

 

I hope that this provides a bit more context on the level of optimization that I would like to open up.

 

Kind Regards,

 

Wesley Oliver

 

 

 

From RFC 7230, HTTP/1.1 Message Syntax and Routing, June 2014:
 
   A server responds to a client's request by sending one or more HTTP
   response messages, each beginning with a status line that includes
   the protocol version, a success or error code, and textual reason
   phrase (Section 3.1.2), possibly followed by header fields containing
   server information, resource metadata, and representation metadata
   (Section 3.2), an empty line to indicate the end of the header
   section, and finally a message body containing the payload body (if
   any, Section 3.3).

 

 

3.1.2.  Status Line

 
 
   The first line of a response message is the status-line, consisting
   of the protocol version, a space (SP), the status code, another
   space, a possibly empty textual phrase describing the status code,
   and ending with CRLF.
 
     status-line = HTTP-version SP status-code SP reason-phrase CRLF
 
   The status-code element is a 3-digit integer code describing the
   result of the server's attempt to understand and satisfy the client's
   corresponding request.  The rest of the response message is to be
   interpreted in light of the semantics defined for that status code.
   See Section 6 of [RFC7231] for information about the semantics of
   status codes, including the classes of status code (indicated by the
   first digit), the status codes defined by this specification,
   considerations for the definition of new status codes, and the IANA
   registry.
 
     status-code    = 3DIGIT
 
   The reason-phrase element exists for the sole purpose of providing a
   textual description associated with the numeric status code, mostly
   out of deference to earlier Internet application protocols that were
   more frequently used with interactive text clients.  A client SHOULD
   ignore the reason-phrase content.
 
     reason-phrase  = *( HTAB / SP / VCHAR / obs-text )

 

3.2.2.  Field Order

 
 
   The order in which header fields with differing field names are
   received is not significant.  However, it is good practice to send
   header fields that contain control data first, such as Host on
   requests and Date on responses, so that implementations can decide
   when not to handle a message as early as possible.  A server MUST NOT
   apply a request to the target resource until the entire request
   header section is received, since later header fields might include
   conditionals, authentication credentials, or deliberately misleading
   duplicate header fields that would impact request processing.
 
   A sender MUST NOT generate multiple header fields with the same field
   name in a message unless either the entire field value for that
   header field is defined as a comma-separated list [i.e., #(values)]
   or the header field is a well-known exception (as noted below).
 
   A recipient MAY combine multiple header fields with the same field
   name into one "field-name: field-value" pair, without changing the
   semantics of the message, by appending each subsequent field value to
   the combined field value in order, separated by a comma.  The order
   in which header fields with the same field name are received is
   therefore significant to the interpretation of the combined field
   value; a proxy MUST NOT change the order of these field values when
   forwarding a message.

 

 

 


Re: Concepts to improve Http2.0

Cory Benfield

On 29 Jul 2016, at 07:35, Wesley Oliver <[hidden email]> wrote:

A header CONTINUATION frame could be used as a hack, if the browser implementation allowed it, as an alternative to the JavaScript page redirect while pushing content on another stream, which I proposed in the document below.

I am strongly opposed to repurposing CONTINUATION in this way. Doing so requires a negotiated HTTP/2 extension because it changes the semantics of an existing frame, so it has just as much work as doing something that doesn’t overload CONTINUATION.

Concept on how I envisage the modified http 2.0 state to work.

Request
Response Headers -  for the current request.
Data Frame - Requested pages.

Optimistically send ENDHEADERS, STARTHEADERS with the 
Response Headers - Output 404 and because this is not a continuation headers frame, it reset the stream state implicitly as it is the first headers frame.
                              The browser also then knows to reset its internal workings such that it can render a new response body for the same url.
Data Frame - Start, to mark difference between old header response body and the new header response body.
End Stream.

This flow requires negotiation. This is because currently a second HEADERS frame after the initial response header represents trailers, and therefore must carry END_STREAM (either directly on the HEADERS frame or on the last of its CONTINUATION frames): it does not “reset the stream state implicitly". Most implementations will enforce this restriction, and so you’d need a negotiated extension (as per RFC 7540 Section 5.5) to get this to work. Probably this extension would define a new HEADERS flag (START_AGAIN) that simply transitions the stream back to a pre-data-sending state.
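A toy model of the stream-state change described here, purely as a sketch: the START_AGAIN flag is the hypothetical extension flag from the paragraph above and would require negotiation per RFC 7540 Section 5.5; nothing below is standard HTTP/2 behaviour beyond the trailers rule.

```python
# Toy HTTP/2 response-stream state machine, extended with the hypothetical
# START_AGAIN flag. Without the extension, a second HEADERS frame is only
# legal as trailers and must carry END_STREAM.

class StreamError(Exception):
    pass

class ResponseStream:
    def __init__(self, start_again_negotiated=False):
        self.state = "idle"  # idle -> headers_sent -> data -> closed
        self.start_again = start_again_negotiated

    def on_headers(self, end_stream=False, start_again=False):
        if self.state == "idle":
            self.state = "headers_sent"
        elif start_again and self.start_again:
            # Extension behaviour: discard the response so far and
            # transition back to a pre-data-sending state.
            self.state = "headers_sent"
        elif end_stream:
            # Standard HTTP/2: a later HEADERS frame is trailers and
            # must end the stream.
            self.state = "closed"
        else:
            raise StreamError("second HEADERS without END_STREAM is invalid")

    def on_data(self):
        if self.state not in ("headers_sent", "data"):
            raise StreamError("DATA before HEADERS")
        self.state = "data"
```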


Cory

Re: Concepts to improve Http2.0

Amos Jeffries-2
In reply to this post by Poul-Henning Kamp
On 28/07/2016 6:30 p.m., Poul-Henning Kamp wrote:

> --------
> In message <em51dddd7f-de76-4e87-abcb-0f315b115499@bodybag>, "Adrien de Croy" writes:
>
>> The problem with deferring headers in responses to after content, is
>> that proxies often make policy decisions based on response headers, and
>> therefore need these to be all up front.
>>
>> Trailers for this reason are also a problem
>
> We talked about this in the workshop, and yes, trailers *in general*
> is a problem, but the specific trailers people care about are not.
>
> The trailers people ask for, as far as I understood:
>
> Etag
>
> Set-cookie
>
> Cache-Control(/Expires/Age)
>
> They are *not* a problem.
>

Technically true. But those last three are exceedingly annoying if
pushed into Trailers, verging on being an outright attack: we
reserve cache space and do a lot of storage activity before finding out
whether it's actually not cacheable after all, and usually something else
potentially useful got discarded to make room for it as well.

Amos



Re: Concepts to improve Http2.0

Wesley Oliver
In reply to this post by Cory Benfield
Hi,

I hear you, Cory Benfield: a HEADERS flag such as START_AGAIN
would be a simple enough extension to HTTP/2 to achieve this outcome.

I see that the documentation says nothing about how the negotiation is to happen.


5.5. Extending HTTP/2


   This document doesn't mandate a specific method for negotiating the
   use of an extension but notes that a setting (Section 6.5.2) could be
   used for that purpose.  If both peers set a value that indicates
   willingness to use the extension, then the extension can be used.  If


I would envisage the following negotiation.

In the tradition of HTTP/1.x, we could add a request header field
to indicate support for START_AGAIN, and also add a settings parameter for it.

The reason for putting it in a request header field
is that the server needs to decide whether or not to buffer the response immediately,
especially for the first request on a connection to a new IP (domain).
Otherwise it would have to wait for the SETTINGS exchange to complete first,
which would introduce latency for the client and force the server to block
before it could respond, clearly a degradation of performance.

The browser would then additionally advertise support in an HTTP/2 SETTINGS frame
for the entire lifetime of the connection, and upon confirmation stop
adding the START_AGAIN request header field.

Would the negotiation described above be acceptable?

Kind Regards,

Wesley Oliver






On Fri, Jul 29, 2016 at 9:28 AM, Cory Benfield <[hidden email]> wrote:
<snip>


Re: Concepts to improve Http2.0

Cory Benfield

On 29 Jul 2016, at 09:31, Wesley Oliver <[hidden email]> wrote:

I see that the documentation says nothing about how the negotiation is to happen.

In this case, a setting is necessary: a header field is not good enough. This is because this functionality requires that all entities on the connection (intermediaries too) understand the change this makes to the H2 stream state machine. That works when transmitted on a SETTINGS frame because each hop of the connection that is actually participating in the H2 connection needs to look at the SETTINGS frame and respond appropriately. Header fields, however, may be passed through to the endpoint, which leads to a situation where the client and server can both do this but the intermediary cannot, and the intermediary mangles or otherwise terminates the connection.

Otherwise it would have to wait for the SETTINGS exchange to complete first,
which would introduce latency for the client and force the server to block
before it could respond, clearly a degradation of performance.

The server needs to do this anyway. The start of a HTTP/2 connection involves both parties sending SETTINGS frames. The server cannot receive the first HEADERS frame without having previously received a SETTINGS from the client that would be offering support for this functionality.
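The ordering guarantee Cory relies on is visible on the wire. A minimal sketch of a client connection preface followed by a SETTINGS frame (the setting identifier 0xF000 is a made-up placeholder for the hypothetical extension, not a registered parameter):

```python
import struct

# RFC 7540 Section 3.5: the client connection preface, which MUST be
# followed by a SETTINGS frame before anything else.
PREFACE = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"
SETTINGS_FRAME_TYPE = 0x4
HYPOTHETICAL_SETTINGS_START_AGAIN = 0xF000  # placeholder ID, not registered

def settings_frame(settings):
    """Encode a SETTINGS frame: 9-byte frame header + 6 bytes per entry."""
    payload = b"".join(struct.pack(">HI", ident, value)
                       for ident, value in settings)
    header = struct.pack(">I", len(payload))[1:]               # 24-bit length
    header += struct.pack(">BBI", SETTINGS_FRAME_TYPE, 0, 0)   # type, flags, stream 0
    return header + payload

def client_preface():
    # Because SETTINGS precedes the first HEADERS, the server learns about
    # any extension support before it has to answer a request.
    return PREFACE + settings_frame([(HYPOTHETICAL_SETTINGS_START_AGAIN, 1)])
```

This is why a header-field-based negotiation buys no latency: the SETTINGS bytes are on the wire before the first request anyway.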

Cory


Re: Concepts to improve Http2.0

Mark Nottingham-2
In reply to this post by Amos Jeffries-2
On 29 Jul 2016, at 9:50 AM, Amos Jeffries <[hidden email]> wrote:

>
> On 28/07/2016 6:30 p.m., Poul-Henning Kamp wrote:
>> <snip>
>
> Technically true. But those last three are exceedingly annoying if
> pushed into Trailers, verging on being an outright attack: we
> reserve cache space and do a lot of storage activity before finding out
> whether it's actually not cacheable after all, and usually something else
> potentially useful got discarded to make room for it as well.

Trailer: ETag would probably be a good hint about that...

--
Mark Nottingham   https://www.mnot.net/






Re: Concepts to improve Http2.0

Wesley Oliver
In reply to this post by Cory Benfield
Hi,

As per the specification, I don't
see any requirement that the SETTINGS frame has to be transmitted first.


On Fri, Jul 29, 2016 at 10:58 AM, Cory Benfield <[hidden email]> wrote:
<snip>





Re: Concepts to improve Http2.0

Amos Jeffries-2
On 29/07/2016 11:13 p.m., Wesley Oliver wrote:
> Hi,
>
> As per the specification, I don't
> see any requirement that the SETTINGS frame has to be transmitted first.
>

RFC 7540 section 3.5 paragraphs 3, 4, and 5

"
   The client connection preface ... MUST be followed by a
   SETTINGS frame (Section 6.5), ...

   The server connection preface consists of ...
   SETTINGS frame (Section 6.5) that MUST be the first frame the server
   sends in the HTTP/2 connection.

   The SETTINGS frames received from a peer as part of the connection
   preface MUST be acknowledged (see Section 6.5.3) after sending the
   connection preface.
"


HTH
Amos



Re: Concepts to improve Http2.0

Amos Jeffries-2
In reply to this post by Mark Nottingham-2
On 29/07/2016 11:07 p.m., Mark Nottingham wrote:

> On 29 Jul 2016, at 9:50 AM, Amos Jeffries wrote:
>> <snip>
>
> Trailer: ETag would probably be a good hint about that...
>

By the last three I was meaning "Cache-Control(/Expires/Age)" in PHK's list.

On second thought, there are also some hidden security
considerations around potentially storing the reply to non-volatile
storage when a 'Cache-Control: no-store' is deferred to Trailers.

Amos



Re: Concepts to improve Http2.0

Wesley Oliver
In reply to this post by Wesley Oliver
Hi,

Sorry, I missed that interpretation of the following,
and the fact that the stream life-cycle state diagram doesn't show that requirement.

5. Streams and Multiplexing



  The order in which frames are sent on a stream is significant.
      Recipients process frames in the order they are received.  In
      particular, the order of HEADERS and DATA frames is semantically
      significant.

Sections:
6.5.  SETTINGS

   A SETTINGS frame MUST be sent by both endpoints at the start of a
   connection and MAY be sent at any other time by either endpoint over
   the lifetime of the connection.  Implementations MUST support all of
   the parameters defined by this specification.


So there would be no problem in simply using the SETTINGS frame
to communicate that this functionality is supported by the receiving peer.

I can see why intermediate proxies would have a problem and would require a round trip.
However, intermediate proxies should be allowed to modify SETTINGS frames as they pass through,
downgrading them to what the intermediary supports, which means
there wouldn't need to be a round-trip confirmation, as the server would always
know the highest supported settings.

The client browser should support all previously downgraded settings values.

This may not fit all existing settings, meaning we may need to
categorize settings into classes by their behaviour/side effects, so that certain settings may
be optimistically overridden by intermediaries.

I will look a little later into which settings would be affected by an optimistic composition
approach, covering the parameters in Section 6.5.2, Defined SETTINGS Parameters.


Kind Regards,

Wesley Oliver



On Fri, Jul 29, 2016 at 1:13 PM, Wesley Oliver <[hidden email]> wrote:
<snip>

Re: Concepts to improve Http2.0

Wesley Oliver
Hi,

Is the HTTP/2 protocol state machine meant to be kept simple enough
for low-level network layers to understand it directly, at packet or frame-level inspection?

Kind Regards,

Wesley Oliver

On Fri, Jul 29, 2016 at 1:49 PM, Wesley Oliver <[hidden email]> wrote:
<snip>

Re: Concepts to improve Http2.0

Patrick McManus
In reply to this post by Amos Jeffries-2
On Fri, Jul 29, 2016 at 7:40 AM, Amos Jeffries <[hidden email]> wrote:

On second thought, there are also some hidden security
considerations around potentially storing the reply to non-volatile
storage when a 'Cache-Control: no-store' is deferred to Trailers.


Could indeed be true - that was part of the workshop discussion too. There seemed to be general confidence that trailers could be exposed as separate connection specific meta-data (i.e. here are your trailers that might contain some debugging - they aren't headers) but whether or not they could ever be treated semantically as headers (either generally or in specific cases - which might have different answers) needed more work to determine.
 
Given that it is a connection level mechanism it might not be terribly helpful though.


Re: Concepts to improve Http2.0

Patrick McManus
In reply to this post by Lucas Pardue
Hey Wesley - this is just my opinion,

Allow multiple responses to be sent over the wire for the same request, whereby the last response transmitted over the wire will form the official response that is permanently rendered in the client browser.

You might find a better fit if you up-level some of this to the application while leaning on some H2 features to make it work well. For instance, JS lets you totally rewrite your DOM based on same-origin content obtained from xhr/fetch. This is not dissimilar to your concept of an html page with multiple roots. It's not hard to imagine how to stitch these things together with a liberal dose of HTTP/2 Push for the dynamic bits and good prioritization for the dynamic xhr/fetch data in a very responsive and high-performing way that doesn't require buffering on either end.

Approaches that wrap more stuff into the same http transaction, and have to make the transport jump through hoops to do it, probably go against the tide.

-Patrick


Re: Concepts to improve Http2.0

Matthew Kerwin
In reply to this post by Wesley Oliver
Hi, just a couple of points here:

On 29 July 2016 at 21:49, Wesley Oliver <[hidden email]> wrote:
Sorry, I missed that interpretation of the following,
and the fact that the stream life-cycle state diagram doesn't show that requirement.


The diagram is of the lifecycle of a stream; the initial SETTINGS is part of the lifecycle of the connection.

 
<snip>

I can see why intermediate proxies would have a problem and would require a round trip.
However, intermediate proxies should be allowed to modify SETTINGS frames as they pass through,
downgrading them to what the intermediary supports, which means
there wouldn't need to be a round-trip confirmation, as the server would always
know the highest supported settings.

Settings are hop-by-hop, not end-to-end: what a browser advertises to a proxy in a SETTINGS frame has little to no bearing on what the proxy advertises to the server, and vice versa in the other direction.

And I think that's still fair enough. If a proxy is willing to buffer an entire stream and rearrange everything so it looks kosher then it doesn't matter if the downstream peer wouldn't have accepted the replayed messages/overriding trailers/whatever.

That said, I still think there's a smell here. I'm going to go out on a limb, drawing on my years as a PHP developer, to say that the primary use case for this proposal is to allow the application developer to catch an error while generating a response, and change the :status from 200 to 500 (or similar). In the best case the browser gets the 200 response straight away and starts receiving response body chunks as they're generated, as happens now without server-side buffering. However if something goes wrong, the browser ... what? Receives an EOF on the response, then gets a "hang on, replace all that with a 500", so it dumps the partially-rendered document and starts displaying the incoming error document? Surely that's not good UX. It feels to me like, if your application might throw such an exception mid-response, you'd be best buffering it yourself. If it's a cacheable response, you can at least then put in appropriate Expires/ETags/etc. headers and let a cache optimise subsequent requests for you (or even manually cache it yourself serverside.)
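The "buffer it yourself" fallback Matthew describes, i.e. the status quo the proposal tries to avoid, looks roughly like this (illustrative sketch; render_page and the handler shape are hypothetical, not any real framework's API):

```python
# Server-side buffering fallback: only commit a 200 once generation has
# finished cleanly; any mid-generation error can still become a clean 500,
# because no headers have gone on the wire yet.

def render_page():
    """Application code that may fail partway through generation."""
    yield b"<html><body>"
    raise RuntimeError("template failure midway through generation")

def handle_request(render=render_page):
    try:
        body = b"".join(render())
    except Exception:
        return 500, b"<html><body>Internal Server Error</body></html>"
    return 200, body
```

The cost is exactly the buffering latency the original proposal wants to eliminate; the benefit is that the client never sees a half-rendered document replaced by an error page.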

Cheers
--
  Matthew Kerwin
  http://matthew.kerwin.net.au/


Re: Concepts to improve Http2.0

Martin J. Dürst
In reply to this post by Patrick McManus
On 2016/07/29 21:49, Patrick McManus wrote:
> Hey Wesley - this is just my opinion,

Well, mine too. Thanks, Patrick, for putting it so well. We already have
all the necessary stuff with JS; no need to duplicate it one layer below.

Regards,   Martin.

> <snip>
>
