Client Certificates - re-opening discussion


Client Certificates - re-opening discussion

Mark Nottingham-2
Hi,

We've talked about client certificates in HTTP/2 (and elsewhere) for a while, but the discussion has stalled.

I've heard from numerous places that this is causing Pain. So, I'd like to devote a chunk of our time in Yokohama to discussing this.

If you have a proposal or thoughts that might become a proposal in this area, please brush it off and be prepared. Of course, we can discuss on-list in the meantime.

Cheers,

--
Mark Nottingham   https://www.mnot.net/






Re: Client Certificates - re-opening discussion

Martin Thomson-3
There is work ongoing in TLS 1.3 that I can report on with greater
certainty after the interim completes next week.

On 17 September 2015 at 15:10, Mark Nottingham <[hidden email]> wrote:

> [...]


Re: Client Certificates - re-opening discussion

Henry Story-4
In reply to this post by Mark Nottingham-2

> On 17 Sep 2015, at 23:10, Mark Nottingham <[hidden email]> wrote:
> [...]


Apart from proposals such as Martin Thomson's and the follow-up work
referenced earlier in this thread by Mike Bishop [1], I'd like to
mention some more HTTP-centric prototypes. These rely not so much on
certificates as on linked public keys, and build on existing HTTP
mechanisms such as WWW-Authenticate; if they pass security scrutiny,
it seems to me they would fit nicely with HTTP/2.

• Andrei Sambra's first sketch authentication protocol          
  https://github.com/solid/solid-spec#webid-rsa

• Manu Sporny's more fully fleshed out HTTP Message signature    
  https://tools.ietf.org/html/draft-cavage-http-signatures-04
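To give a flavour of the second proposal, here is a rough,
non-authoritative sketch of building a draft-cavage style Signature
header, using the draft's hmac-sha256 option (the key id, header set,
and request below are invented for the example):

```python
import base64
import hashlib
import hmac

def signing_string(method, path, headers, signed_header_names):
    # Per draft-cavage-http-signatures: concatenate "(request-target)"
    # and the selected headers, lowercase names, one per line.
    lines = []
    for name in signed_header_names:
        if name == "(request-target)":
            lines.append(f"(request-target): {method.lower()} {path}")
        else:
            lines.append(f"{name}: {headers[name]}")
    return "\n".join(lines)

def signature_header(method, path, headers, key_id, key):
    names = ["(request-target)", "host", "date"]
    sig = hmac.new(key, signing_string(method, path, headers, names).encode(),
                   hashlib.sha256).digest()
    return ('Signature keyId="%s",algorithm="hmac-sha256",'
            'headers="%s",signature="%s"'
            % (key_id, " ".join(names), base64.b64encode(sig).decode()))

hdrs = {"host": "example.org", "date": "Fri, 18 Sep 2015 18:45:00 GMT"}
print(signature_header("GET", "/resource", hdrs, "test-key", b"secret"))
```

The signing string covers `(request-target)` plus the headers listed in
the `headers` parameter, so the verifier can reconstruct exactly what
was signed.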

These, like the more TLS-centric protocols, require the user
agent to be able to use public/private keys generated by
the agent, and signed or published by an origin, to
authenticate or sign documents across origins.

This is where one often runs into the Same Origin Policy (SOP)
stone wall. There was an important discussion on
[hidden email] [1] and [hidden email]
entitled

  "A Somewhat Critical View of SOP (Same Origin Policy)" [2]

that I think has helped clarify the distinctions between Same Origin
Policy, linkability, privacy, and user control, and which I hope
has helped show that this policy can neither be applied without
care nor be applied everywhere.

The arguments developed there should be helpful in opening the
discussion here and elsewhere too, I think. In a couple of e-mails in
that thread, I went into great detail showing how SOP, linkability,
user control, and privacy apply in very different ways to four
technologies: cookies, FIDO, the JS Crypto API, and client
certificates [3]. This shows that the concepts don't overlap: two are
technical and two legal/philosophical, each technology enabling some
aspect of the others, and not always in the way one would expect.

With those conceptual distinctions made, I think the path to
acceptance of solutions proposed by this group will be much easier.

Looking forward to following and testing work developed here,

All the best,

        Henry


[1] • starting:
    https://lists.w3.org/Archives/Public/ietf-http-wg/2015AprJun/0558.html
    • most recent, by Mike Bishop:
    https://lists.w3.org/Archives/Public/ietf-http-wg/2015JulSep/0310.html
[2] https://lists.w3.org/Archives/Public/public-webappsec/2015Sep/
[3] https://lists.w3.org/Archives/Public/public-webappsec/2015Sep/0101.html
    which is in part summarised, with respect to FIDO, in a much
    shorter email:
    https://lists.w3.org/Archives/Public/public-webappsec/2015Sep/0119.html

Social Web Architect
http://bblfish.net/



Re: Client Certificates - re-opening discussion

Mark Nottingham-2
In reply to this post by Mark Nottingham-2
Hi Henry,

Thanks, but this is a much more narrowly-scoped discussion -- how to make client certs as they currently operate work in HTTP/2. At most, I think we'd be talking about incrementally improving client certs (e.g., clarifying / optimising the scope of their applicability -- and that really just is an example, not a statement of intent).

Cheers,


> On 18 Sep 2015, at 11:53 am, Henry Story <[hidden email]> wrote:
> [...]

--
Mark Nottingham   https://www.mnot.net/






Re: Client Certificates - re-opening discussion

Ilari Liusvaara
In reply to this post by Mark Nottingham-2
On Thu, Sep 17, 2015 at 06:10:49PM -0400, Mark Nottingham wrote:
> Hi,
>
> We've talked about client certificates in HTTP/2 (and elsewhere)
> for a while, but the discussion has stalled.
>
> If you have a proposal or thoughts that might become a proposal
> in this area, please brush it off and be prepared. Of course, we
> can discuss on-list in the meantime.

Basically, the ways I know one could do client certs in HTTP/2 have
both been floated before:

1) Signal that a client cert is needed, so the client can establish
a new connection for the authenticated stuff.

2) Do client certs at the HTTP level, using the usual HTTP
authentication headers and TLS channel-binding mechanisms[1] (but the
certificates themselves require some special handling, due to
size[2]).


[1] SPDY/3 did something like this, except with its own frame
types.

[2] A bit of a crazy idea: PUT with a .well-known resource.
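For what option 2 might look like, here is a rough sketch (the
`ClientKey` header name and its parameters are invented for
illustration, and an HMAC over a shared secret stands in for a
signature with the certificate's key, to keep the sketch
self-contained; Python's `ssl` module does expose the tls-unique
binding via `SSLSocket.get_channel_binding`):

```python
import base64
import hashlib
import hmac
import ssl  # real code: binding = tls_socket.get_channel_binding("tls-unique")

def client_auth_header(channel_binding: bytes, key_id: str, key: bytes) -> str:
    # Prove possession of the key over the TLS channel binding, so the
    # proof is tied to this connection and can't be replayed elsewhere.
    proof = hmac.new(key, channel_binding, hashlib.sha256).digest()
    return 'ClientKey keyId="%s",proof="%s"' % (
        key_id, base64.b64encode(proof).decode())

# Placeholder bytes standing in for the binding of a live TLS connection:
binding = b"\x01" * 12
print(client_auth_header(binding, "device-42", b"shared-secret"))
```

The point of signing the channel binding rather than a nonce is that
the server learns the proof was generated for this very TLS
connection, which is what makes an HTTP-level scheme comparable to
TLS-level client certs.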


-Ilari


Re: Client Certificates - re-opening discussion

Henry Story-4

> On 18 Sep 2015, at 18:45, Ilari Liusvaara <[hidden email]> wrote:
>
>> [...]
>
> Basically, the ways I know one could do client certs in HTTP/2 have
> both been floated before:
>
> 1) Signal about client cert being needed, client can establish
> new connection for the authenticated stuff.
>
> 2) Do client cert at HTTP level, using the usual HTTP authentication
> headers and TLS channel binding mechanisms[1] (but certificates
> themselves require some special handling, due to size[2]).
>
>
> [1] SPDY/3 did something like this, except with its own frame
> types.
>
> [2] Bit crazy idea: PUT with .well-known resource.

You mean: don't send the certificate, but link to it on the web?
Then you are close to WebID-TLS:
  http://www.w3.org/2005/Incubator/webid/spec/
WebID-TLS only published the public key, but one could
also publish the full certificate. (People had suggested
that before, but we were waiting for larger use cases to
consider it.)

The advantage of following that pattern is that you can put the
certificate anywhere you like, not just in .well-known.
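As a sketch of what "link to it on the web" might mean for a server
(the function names here are invented; the vetting is deliberately
minimal, and a real deployment would have to address the security
issues of dereferencing client-supplied URLs):

```python
import urllib.request
from urllib.parse import urlparse

def is_acceptable_cert_url(url: str) -> bool:
    # Minimal vetting before dereferencing a client-supplied link:
    # require an absolute https URL with a host. A real server would
    # need stronger controls (SSRF protection, caching, size limits).
    p = urlparse(url)
    return p.scheme == "https" and bool(p.netloc)

def fetch_linked_cert(cert_url: str, timeout: float = 5.0) -> bytes:
    # Dereference the published certificate instead of carrying it in
    # the handshake or in headers.
    if not is_acceptable_cert_url(cert_url):
        raise ValueError("refusing to fetch: %r" % cert_url)
    with urllib.request.urlopen(cert_url, timeout=timeout) as resp:
        return resp.read()

print(is_acceptable_cert_url("https://bblfish.net/people/henry/card"))
```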

You can think of the WebID profile document as a certificate
linked on the web. Then we are close to the WebID-RSA and
HTTP-signature proposals I mentioned earlier, but Mark pointed
out that it's out of scope in this thread. I could open another
thread to discuss those when/if people are interested.

Henry

>
>
> -Ilari
>

Social Web Architect
http://bblfish.net/



Re: Client Certificates - re-opening discussion

Mike Belshe
In reply to this post by Mark Nottingham-2
In a strange twist of fate I find myself doing a lot of PKI work these days, and I've considered a fair bit about how client-certs might help with some of my application-level needs.

However, just like HTTP's basic-auth, I wonder whether HTTP- or TLS-level client certs will just never be used. My concern, of course, is that we build something that has a user experience similar to HTTP's basic-auth: it's so bad that nobody can use it, and authentication gets pulled into web pages (where, ironically, it is less secure!).

Mark - you said there is "pain".  Is there a set of use cases to be solved here?  Let me know if I missed them - I may be able to contribute.  

My suspicion is that we really need crypto features moved up a level from the protocol, as it will be very difficult to make satisfactory user interfaces from the protocol level alone.  Perhaps for machine-to-machine auth it would be okay.

Mike





On Fri, Sep 18, 2015 at 10:05 AM, Mark Nottingham <[hidden email]> wrote:
[...]







Re: Client Certificates - re-opening discussion

Mark Nottingham-2

> On 18 Sep 2015, at 2:20 pm, Mike Belshe <[hidden email]> wrote:
>
> [...]
>
> Mark - you said there is "pain".  Is there a set of use cases to be solved here?  Let me know if I missed them - I may be able to contribute.  

By "pain", I meant that sites that use client certs (which turns out to be more common than many thought) want to update to HTTP/2, but can't.

Cheers,


> [...]

--
Mark Nottingham   https://www.mnot.net/






RE: Client Certificates - re-opening discussion

Mike Bishop
In reply to this post by Mike Belshe

We have historically had cases where customers were either legally mandated to use client certificate authentication specifically, or more generally had an IT requirement to use two-factor authentication to access enterprise resources.  I'll research some of these cases and see whether I can share details to frame this conversation in Yokohama.  Internally, we use it regularly – the certificate lives on a smartcard or the TPM, or was simply issued to the machine when it enrolled for device management.

 

For us, at least, the “pain” is that we can’t support a legal requirement without falling back to HTTP/1.1 and generating even more round-trips.  Our HTTP/2 investments don’t apply as soon as we’re talking to the auth server.

 

From: Mike Belshe [mailto:[hidden email]]
Sent: Friday, September 18, 2015 11:20 AM
To: Mark Nottingham <[hidden email]>
Cc: Henry Story <[hidden email]>; HTTP Working Group <[hidden email]>
Subject: Re: Client Certificates - re-opening discussion

 

[...]


Re: Client Certificates - re-opening discussion

Ilari Liusvaara
In reply to this post by Henry Story-4
On Fri, Sep 18, 2015 at 07:11:20PM +0100, [hidden email] wrote:
>
> You mean: don't send the certificate, link to it on the web?
> Then you are close to WebID-TLS
>   http://www.w3.org/2005/Incubator/webid/spec/
> WebID-TLS only published the public key, but one could
> also publish the full certificate. ( people had suggested
> that before, but we were waiting for larger use cases to
> consider it )

No, I meant sending the certificate chain. But if the equivalent of
the certificate chain is just a single raw public key, one could
stick it in headers (though I suppose for implementability reasons
one would not do that).

> The advantage of following that pattern is that you can put the
> certificate anywhere you like, not just in .well-known.

Which brings all the security issues that come with retrieving URLs.
Also, most users probably won't have anywhere to publish the cert.


-Ilari


Re: Client Certificates - re-opening discussion

Eric Rescorla-3
In reply to this post by Mark Nottingham-2


On Fri, Sep 18, 2015 at 10:05 AM, Mark Nottingham <[hidden email]> wrote:
Hi Henry,

Thanks, but this is a much more narrowly-scoped discussion -- how to make client certs as they currently operate work in HTTP/2.

Is this a question about HTTP/2's limitations versus HTTP/1.1, or
about deficiencies in HTTP/1.1 that HTTP/2 has not fixed?

-Ekr

[...]







Re: Client Certificates - re-opening discussion

Ilari Liusvaara
On Fri, Sep 18, 2015 at 01:48:50PM -0700, Eric Rescorla wrote:

> On Fri, Sep 18, 2015 at 10:05 AM, Mark Nottingham <[hidden email]> wrote:
>
> > Hi Henry,
> >
> > Thanks, but this is a much more narrowly-scoped discussion -- how to make
> > client certs as they currently operate work in HTTP/2.
>
>
> Is this a question about HTTP/2's limitations versus HTTP/1.1 or about
> deficiencies
> in HTTP/1.1 that HTTP/2 has not fixed?

I think this is about the extra limitations of HTTP/2 regarding client
authentication caused by major design differences between HTTP/1.1 and
HTTP/2.

Client certs in HTTP/1.1 aren't too great, but at least those don't
seem to even remotely have the same problems as client certs in HTTP/2
(especially when in web environment).


-Ilari


Re: Client Certificates - re-opening discussion

Stefan Eissing

> Am 18.09.2015 um 22:57 schrieb Ilari Liusvaara <[hidden email]>:
>
>> On Fri, Sep 18, 2015 at 01:48:50PM -0700, Eric Rescorla wrote:
>>> On Fri, Sep 18, 2015 at 10:05 AM, Mark Nottingham <[hidden email]> wrote:
>>>
>>> Hi Henry,
>>>
>>> Thanks, but this is a much more narrowly-scoped discussion -- how to make
>>> client certs as they currently operate work in HTTP/2.
>>
>>
>> Is this a question about HTTP/2's limitations versus HTTP/1.1 or about
>> deficiencies
>> in HTTP/1.1 that HTTP/2 has not fixed?
>
> I think this is about the extra limitations of HTTP/2 regarding client
> authentication caused by major design differences between HTTP/1.1 and
> HTTP/2.
>
> Client certs in HTTP/1.1 aren't too great, but at least those don't
> seem to even remotely have the same problems as client certs in HTTP/2
> (especially when in web environment).

Just to have everyone on the same page: the problems, as we see them in httpd, are

1. http/1.1 requests may trigger client certs, which may require renegotiation. Processing is no longer sequential with http/2, causing conflicts. Even if mutexed, what do the connection state and the h2 stream have to say to each other, and for how long?

2. Connection reuse for different hosts is much more likely, as a lot of sites have a long list of subjectAltNames. That raises the likelihood of the conflicts described above.

Any advice on how to address this in an interoperable way is appreciated.
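[Editor's sketch] The interleaving Stefan describes can be modeled in a few lines. This is a toy single-process simulation, not any real httpd or TLS API; every class, method, and certificate name below is invented for illustration. Several h2 "streams" share one connection whose client-auth state sits behind a mutex, and the streams that don't need the certificate race past the one that triggers the handshake.

```python
import threading

# Toy model (all names illustrative): h2 streams sharing one TLS
# connection whose client-auth state can change mid-flight.
class TlsConnection:
    def __init__(self):
        self.client_cert = None
        self.lock = threading.Lock()  # serializes the "renegotiation"

    def get_client_cert(self):
        # Every stream needing the certificate blocks here until the
        # pretend renegotiation completes -- the serialization point.
        with self.lock:
            if self.client_cert is None:
                self.client_cert = "CN=example-client"  # pretend handshake
            return self.client_cert

def handle_stream(conn, stream_id, needs_auth, log):
    if needs_auth:
        cert = conn.get_client_cert()
        log.append((stream_id, "200 for " + cert))
    else:
        # Unauthenticated streams proceed without waiting, so their
        # responses may land before, during, or after the handshake.
        log.append((stream_id, "200 anonymous"))

conn, log = TlsConnection(), []
threads = [threading.Thread(target=handle_stream, args=(conn, i, i == 2, log))
           for i in (1, 2, 3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# The contents of `log` are deterministic, but the ordering is not:
# exactly the nondeterminism that is hard to pin down in h2.
```

Note that the mutex fixes only the TLS-level conflict; it says nothing about which responses the client observes in the pre- versus post-authentication state, which is the question raised above.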

//Stefan

Re: Client Certificates - re-opening discussion

Eric Rescorla-3


On Sat, Sep 19, 2015 at 12:18 AM, Stefan Eissing <[hidden email]> wrote:

> Am 18.09.2015 um 22:57 schrieb Ilari Liusvaara <[hidden email]>:
>
>> On Fri, Sep 18, 2015 at 01:48:50PM -0700, Eric Rescorla wrote:
>>> On Fri, Sep 18, 2015 at 10:05 AM, Mark Nottingham <[hidden email]> wrote:
>>>
>>> Hi Henry,
>>>
>>> Thanks, but this is a much more narrowly-scoped discussion -- how to make
>>> client certs as they currently operate work in HTTP/2.
>>
>>
>> Is this a question about HTTP/2's limitations versus HTTP/1.1 or about
>> deficiencies
>> in HTTP/1.1 that HTTP/2 has not fixed?
>
> I think this is about the extra limitations of HTTP/2 regarding client
> authentication caused by major design differences between HTTP/1.1 and
> HTTP/2.
>
> Client certs in HTTP/1.1 aren't too great, but at least those don't
> seem to even remotely have the same problems as client certs in HTTP/2
> (especially when in web environment).

> Just to have everyone on the same page. The problems - as we see them in httpd - are
>
> 1. http/1.1 requests may trigger client certs which may require renegotiation. Processing is no longer sequential with http/2, causing conflicts.

Well, presently renegotiation is illegal in HTTP/2, so this is a non-problem.

However, I suppose if we land TLS 1.3 PR#209 it will come back.

-Ekr
 
> Even if mutexed what does connection state and h2 stream have to say to each other and for how long?
>
> 2. connection reuse for different hosts is much more likely as a lot of sites have a long list of subjectAltNames. That raises the likelihood of conflicts as described above.
>
> Any advice on how to address this in an interoperable way is appreciated.
>
> //Stefan


RE: Client Certificates - re-opening discussion

Mike Bishop

Kind of a non-problem, but it’s also the problem itself.  The HTTP layer will call different APIs in TLS, but the API HTTP exposes (get client certificate) won’t necessarily change.

• HTTP/1.x + TLS <=1.2 – Client certs work via renegotiation

• HTTP/x + TLS 1.3 – Client certs work via a new TLS feature that isn’t renegotiation

• HTTP/2 + TLS 1.2 – How do client certs work?

 

It’s a time-scoped problem, since we hope everyone will eventually be on TLS 1.3, but it’s a nearly-universal problem at the moment.  There are many proposed kludges for HTTP/2 over TLS 1.2 in the meantime, but we need to find something with broader support than any idea currently has.

 



Re: Client Certificates - re-opening discussion

Yoav Nir-3
Hi, Mike

On Sep 20, 2015, at 1:10 AM, Mike Bishop <[hidden email]> wrote:

> Kind of a non-problem, but it’s also the problem itself.  The HTTP layer will call different APIs in TLS, but the API HTTP exposes (get client certificate) won’t necessarily change.
>
> • HTTP/1.x + TLS <=1.2 – Client certs work via renegotiation
>
> • HTTP/x + TLS 1.3 – Client certs work via new TLS feature that isn’t renegotiation
>
> • HTTP/2 + TLS 1.2 – How do client certs work?
>
> It’s a time-scoped problem, since we hope everyone will eventually be on TLS 1.3, but it’s a nearly-universal problem at the moment.  There are many proposed kludges for HTTP/2 over TLS 1.2 in the meantime, but we need to find something with broader support than any idea currently has.


I’m not sure I see how PR #209 solves the issue.

HTTP/2 prohibited renegotiation because HTTP/2 is non-sequential. A bunch of requests may be in process and it is non-deterministic which responses will be generated before, during and after the client authentication. One resource might trigger the renegotiation, but several others might receive different responses based on whether or not the user is authenticated.

Now suppose we replace renegotiation with the mechanism proposed in PR #209. Some resource triggers the TLS layer, but instead of triggering a re-negotiation by sending a HelloRequest, it triggers client certificate authentication by sending a CertificateRequest. This is different in some senses: there is no change to the master secret; the old channel bindings are still valid; session keys are not replaced. I don’t see what difference this makes. The connection still changes from a state where the client is anonymous to a state where the client is authenticated. Requests sent by the client still may have been responded to before, during or after the change of state. 
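[Editor's sketch] Yoav's comparison can be made concrete with a toy state model. The class, fields, and certificate name below are invented for illustration and are not a real TLS implementation; the point is that only renegotiation rekeys, yet both mechanisms flip the connection from anonymous to authenticated.

```python
import os

# Toy state model of the two mechanisms compared above (illustrative
# names only, not a real TLS stack).
class Conn:
    def __init__(self):
        self.master_secret = os.urandom(16)
        self.client = None  # anonymous

    def renegotiate(self, cert):
        # HelloRequest path: a full handshake, so the keys change.
        self.master_secret = os.urandom(16)
        self.client = cert

    def post_handshake_auth(self, cert):
        # PR #209 path: just a CertificateRequest; keys and channel
        # bindings stay put.
        self.client = cert

a, b = Conn(), Conn()
old_a, old_b = a.master_secret, b.master_secret
a.renegotiate("CN=client")
b.post_handshake_auth("CN=client")

assert a.master_secret != old_a             # renegotiation rekeys...
assert b.master_secret == old_b             # ...#209 does not...
assert a.client == b.client == "CN=client"  # ...but both flip the auth state
```

The assertions capture Yoav's observation: the property that matters to HTTP (the anonymous-to-authenticated transition on a live multiplexed connection) is identical in both cases.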

Maybe I’m missing something, but I don’t see what #209 does that renegotiation did not.

Yoav



RE: Client Certificates - re-opening discussion

Mike Bishop

Better than renegotiation?  Nothing – which is the point.  Renegotiation worked, and our first step is parity with downlevel.  Renegotiation, however, attempted to bring many functions together, some of which made the TLS WG uncomfortable.  This PR creates a more scoped feature targeted at only the presentation of client credentials to the server, which is the feature we actually need.

 

It sounds like, in part, we have different understandings of why renegotiation was prohibited in the first place.  You argue it was prohibited because there’s some inherent indeterminacy, particularly if the application layer doesn’t stall.  I’d argue that that indeterminacy can and should be handled by the application that knows what resources care about the client’s identity and which don’t.

 

If multiple requests cause the server application to query the HTTP layer for the client’s certificate, then all those requests will wait until the client authentication has completed, just as they would have on a non-multiplexed connection.  Where multiplexing adds a new wrinkle is that, under HTTP/1.1, those connections that didn’t require authentication would proceed without interruption until they’re used for a protected request.

 

Perhaps the fundamental question is, when does the client need to know that the server had seen the certificate prior to generating the response?  In HTTP/1.1 over TLS 1.x, it could know that the server had seen it, but couldn’t know whether the server cared.

 



Re: Client Certificates - re-opening discussion

Yoav Nir-3

On Sep 21, 2015, at 6:30 AM, Mike Bishop <[hidden email]> wrote:

> Better than renegotiation?  Nothing – which is the point.  Renegotiation worked, and our first step is parity with downlevel.


So if this working group rejected renegotiation (which worked), why would this new mechanism be acceptable?

> Renegotiation, however, attempted to bring many functions together, some of which made the TLS WG uncomfortable.  This PR creates a more scoped feature targeted at only the presentation of client credentials to the server, which is the feature we actually need.


That’s a good thing, but IMO it doesn’t matter to httpbis.

> It sounds like, in part, we have different understandings of why renegotiation was prohibited in the first place.  You argue it was prohibited because there’s some inherent indeterminacy, particularly if the application layer doesn’t stall.  I’d argue that that indeterminacy can and should be handled by the application that knows what resources care about the client’s identity and which don’t.


Having the application layer stall doesn’t help. The client requests resources A, B, and C. Resource B requires client authentication. By the time the application stalls, waiting for the client authentication, resources A and C may not have been noticed, or the requests may have been serviced, with A and C in a buffer waiting to be encrypted, or the requests may have been serviced and encrypted and on the way back to the client. A and C may be received in the authenticated or the non-authenticated context. Imagine, for example, that A is a bit of HTML that says “Hello, guest” in the unauthenticated context, or “Hello, Mike” after authentication. You can get the certificate picker and still see the “Hello, guest” on the page.
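[Editor's sketch] The A/B/C hazard above fits in a few lines. This is a deliberately simplified, single-threaded sketch; the resources "A", "B", "C", the user name, and the buffer are invented for illustration. A response serviced before the authentication step still reflects the anonymous state, even though it may reach the client afterwards.

```python
# Simplified sketch of the ordering hazard: responses already serviced
# before client authentication completes still show the anonymous state.
user = {"name": "guest"}
send_buffer = []  # responses queued for encryption/transmission

def respond(resource):
    if resource == "B":
        user["name"] = "Mike"  # client authentication happens here
        send_buffer.append("B: private data for Mike")
    else:
        send_buffer.append(resource + ": Hello, " + user["name"])

# The server happens to service A before B, and C after it:
for r in ("A", "B", "C"):
    respond(r)

# A was rendered pre-auth, C post-auth, yet both may arrive together,
# so the page can show "Hello, guest" next to authenticated content.
```

Stalling at the application layer only helps for requests the server has not yet begun to service; anything already in `send_buffer` is past saving, which is the point being made above.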

What’s more, I think HTTP authentication has the same issue. If one request gets processed and generates a 401 with WWW-Authenticate, other resources may or may not have been serviced. You can fix this by carefully designing the application so that you don’t load resources that are different based on state at the same time as the authentication is going on.

> If multiple requests cause the server application to query the HTTP layer for the client’s certificate, then all those requests will wait until the client authentication has completed, just as they would have on a non-multiplexed connection.  Where multiplexing adds a new wrinkle is that, under HTTP/1.1, those connections that didn’t require authentication would proceed without interruption until they’re used for a protected request.
>
> Perhaps the fundamental question is, when does the client need to know that the server had seen the certificate prior to generating the response?  In HTTP/1.1 over TLS 1.x, it could know that the server had seen it, but couldn’t know whether the server cared.


Does it matter for the client?

Yoav


Re: Client Certificates - re-opening discussion

Henry Story-4
First, I am really thankful for this discussion, as I had not been able
to understand what exactly the problem with client certs and
HTTP/2 was. This is very helpful.

On 21 Sep 2015, at 07:09, Yoav Nir <[hidden email]> wrote:


> On Sep 21, 2015, at 6:30 AM, Mike Bishop <[hidden email]> wrote:
>
>> Better than renegotiation?  Nothing – which is the point.  Renegotiation worked, and our first step is parity with downlevel.
>
> So if this working group rejected renegotiation (which worked), why would this new mechanism be acceptable?
>
>> Renegotiation, however, attempted to bring many functions together, some of which made the TLS WG uncomfortable.  This PR creates a more scoped feature targeted at only the presentation of client credentials to the server, which is the feature we actually need.
>
> That’s a good thing, but IMO it doesn’t matter to httpbis.
>
>> It sounds like, in part, we have different understandings of why renegotiation was prohibited in the first place.  You argue it was prohibited because there’s some inherent indeterminacy, particularly if the application layer doesn’t stall.  I’d argue that that indeterminacy can and should be handled by the application that knows what resources care about the client’s identity and which don’t.
>
> Having the application layer stall doesn’t help. The client requests resources A, B, and C. Resource B requires client authentication. By the time the application stalls, waiting for the client authentication, resources A and C may not have been noticed, or the requests may have been serviced, with A and C in a buffer waiting to be encrypted, or the requests may have been serviced and encrypted and on the way back to the client. A and C may be received in the authenticated or the non-authenticated context. Imagine, for example, that A is a bit of HTML that says “Hello, guest” in the unauthenticated context, or “Hello, Mike” after authentication. You can get the certificate picker and still see the “Hello, guest” on the page.
>
> What’s more, I think HTTP authentication has the same issue. If one request gets processed and generates a 401 with WWW-Authenticate, other resources may or may not have been serviced. You can fix this by carefully designing the application so that you don’t load resources that are different based on state at the same time as the authentication is going on.

This seems to show that this is not a client-certificate problem but a different one.

In a data-driven web, where the client generates the page in single-page apps (SPAs), the
identifying information about the user would live in a separate resource. As that information
became available, the SPA would redraw itself to take it into account.

The way you state the problem, it seems related to a resource being both public
and protected simultaneously, with the protected version of the resource returning more
information than the public one. How would a web server indicate to the client that
it should re-download certain resources that now contain more information? Is this even
good practice?  Those are interesting questions.


>> If multiple requests cause the server application to query the HTTP layer for the client’s certificate, then all those requests will wait until the client authentication has completed, just as they would have on a non-multiplexed connection.  Where multiplexing adds a new wrinkle is that, under HTTP/1.1, those connections that didn’t require authentication would proceed without interruption until they’re used for a protected request.

makes sense to me.

 

>> Perhaps the fundamental question is, when does the client need to know that the server had seen the certificate prior to generating the response?  In HTTP/1.1 over TLS 1.x, it could know that the server had seen it, but couldn’t know whether the server cared.
>
> Does it matter for the client?
>
> Yoav



Social Web Architect


Re: Client Certificates - re-opening discussion

Kyle Rose
In reply to this post by Yoav Nir-3
> Having the application layer stall doesn’t help. The client requests
> resources A, B, and C. Resource B requires client authentication. By the
> time the application stalls, waiting for the client authentication,
> resources A and C may not have been noticed, or the requests may have been
> serviced, with A and C in a buffer waiting to be encrypted, or the requests
> may have been serviced and encrypted and on the way back to the client. A
> and C may be received in the authenticated or the non-authenticated context.
> Imagine, for example, that A is a bit of HTML that says “Hello, guest” in
> the unauthenticated context, or “Hello, Mike” after authentication. You can
> get the certificate picker and still see the “Hello, guest” on the page.
>
> What’s more, I think HTTP authentication has the same issue. If one request
> gets processed and generates a 401 with WWW-Authenticate, other resources
> may or may not have been serviced. You can fix this by carefully designing
> the application so that you don’t load resources that are different based on
> state at the same time as the authentication is going on.
>
> If multiple requests cause the server application to query the HTTP layer
> for the client’s certificate, then all those requests will wait until the
> client authentication has completed, just as they would have on a
> non-multiplexed connection.  Where multiplexing adds a new wrinkle is that,
> under HTTP/1.1, those connections that didn’t require authentication would
> proceed without interruption until they’re used for a protected request.

How did this work in practice with HTTP/1.1, with browsers having
multiple simultaneous connections open to the same server?

If I had to guess, I'd say that the primary resource requiring
authentication was typically the root HTML for a page, which would
then of course stall every subsequent request for subresources without
any specific support required in the client: neither multiplexing H2
nor simultaneous HTTP/1.1 connections would be subject to a race
condition in this case, requests for the URLs from previously-loaded
pages that vary on authentication notwithstanding. Otherwise, I'm
guessing the user would have a sub-par UX (e.g., multiple certificate
chooser dialogs).

Kyle
