Report on preliminary decision on TLS 1.3 and client auth

Report on preliminary decision on TLS 1.3 and client auth

Martin Thomson-3

The minutes of the TLS interim have been posted. Some decisions regarding client authentication were made.

https://www.ietf.org/proceedings/interim/2015/09/21/tls/minutes/minutes-interim-2015-tls-3

Here is a summary of the applicable pieces, plus what options I think it provides for HTTP/2...

(Caveat here: aspects of this could change if new information is presented, but it seems unlikely that there will be changes that will affect the core decisions.)

The big change is that a server can request client authentication at any time. A server may also make multiple such requests. Those multiple requests could even be concurrent.

The security claims associated with client authentication require more analysis before we can be certain, but the basic idea is that authentication merely provides the proof that a server needs to regard the entire session to be authentic. In other words, client authentication will apply retroactively. This could allow a request sent prior to authentication to be considered authenticated. This is a property that is implicitly relied on for the existing renegotiation cases and one that we might want to exploit.

Each certificate request includes an identifier that allows it to be correlated with the certificate that is produced in response. This also allows for correlating with application context. This is what I think that we can use to fix HTTP/2.

Clients cannot spontaneously authenticate, which invalidates the designs I have proposed; however, the basic structure is the basis for the first option that I will suggest.


Option 1 uses a new authentication scheme. A request that causes a server to require a client certificate is responded to with a 4xx response containing a ClientCertificate challenge. That challenge includes an identifier.  The server also sends - at the TLS layer - a CertificateRequest containing the same identifier, allowing the client to correlate its HTTP request with the server's CertificateRequest.

Client@HTTP/2:
HEADERS
  :method = GET ...

Server@HTTP/2:
HEADERS
  :status = 401
  www-authenticate = ClientCertificate req="option 1"

Server@TLS:
CertificateRequest { id: "option 1" }

Client@TLS:
Certificate+CertificateVerify { id: "option 1", certificates... }

Client@HTTP/2:
HEADERS
  :method = GET ...

Server@HTTP/2:
HEADERS
  :status = 200


Option 2 aims to more closely replicate the experience we get from renegotiation in HTTP/1.1 + TLS <= 1.2.  Rather than rejecting the request, the server sends an HTTP/2 frame on the stream to indicate to the client to expect a CertificateRequest. That frame includes the identifier.

Client@HTTP/2:
HEADERS
  :method = GET ...

Server@HTTP/2:
EXPECT_AUTH
  id = option 2

Server@TLS:
CertificateRequest { id: "option 2" }

Client@TLS:
Certificate+CertificateVerify { id: "option 2", certificates... }

Server@HTTP/2:
HEADERS
  :status = 200

In this case, the server probably wants to know that the client is willing to respond to these requests, otherwise it will want to use HTTP_1_1_REQUIRED or 421.  So a companion setting to enable this is a good idea (the semantics of the setting that Microsoft use for renegotiation is pretty much exactly what we'd need).
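
As an illustration, the opt-in exchange might look like this (the setting
name is a placeholder; nothing with this name is defined anywhere):

Client@HTTP/2:
SETTINGS
  SETTINGS_CLIENT_AUTH (placeholder) = 1

Only after seeing that setting would the server use the frame from option 2
rather than falling back to HTTP_1_1_REQUIRED or 421.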

I think that the first option has some architectural advantages, but that is all.  The latter more closely replicates what people do today and for that reason, I think that it is the best option.


As for how to implement this same basic mechanism in TLS 1.2, I have an idea that will work for either option, but it's a bit disgusting, so I'll save that for a follow-up email.

Re: Report on preliminary decision on TLS 1.3 and client auth

Amos Jeffries-2

Option 1 re-introduces all the same problems NTLM and Negotiate have
over HTTP in the presence of proxy and gateway intermediaries, by using
single-connection authorization in a multiplexed connection environment.

It would either need to be restricted to Proxy-Authorization, or some
complex logic would be needed to determine whether Authorization or
Proxy-Authorization was the proper response header.

Let's not go there.


Option 2 risks the same mess if the AUTH frame is defined end-to-end.
But a per-hop frame would work nicely as long as it is clear to server
implementers that intermediaries may be the source of the certificate.
Not some "user".


An option 3 might be to use SETTINGS instead of a dedicated AUTH frame,
so that the per-hop nature is made extra clear. That would also be more
backward compatible with older h2 implementations, and would fit in with
clearing dynamic compression contexts at the same time as authenticating.


Amos


Re: Report on preliminary decision on TLS 1.3 and client auth

Martin Thomson-3
On 23 September 2015 at 19:02, Amos Jeffries <[hidden email]> wrote:
>
> Option 2 risks the same mess if the AUTH frame is defined end-to-end.
> But a per-hop frame would work nicely as long as it is clear to server
> implementers that intermediaries may be the source of the certificate.
> Not some "user".

This would naturally be hop-by-hop, by virtue of extensions being
hop-by-hop and by virtue of the setting that enables it also being
hop-by-hop.

> An option 3 might be to use SETTINGS instead of a dedicated AUTH frame,
> so that the per-hop nature is made extra clear. That would also be more
> backward compatible with older h2 implementations, and would fit in with
> clearing dynamic compression contexts at the same time as authenticating.

SETTINGS wouldn't allow the server to correlate the CertificateRequest
with a specific request/response exchange.

Also, while I think of it, we should probably forbid the use of this
on server-initiated streams (i.e., with server push).  That could
cause problems.

Re: Report on preliminary decision on TLS 1.3 and client auth

Amos Jeffries-2
On 24/09/2015 3:41 p.m., Martin Thomson wrote:

> On 23 September 2015 at 19:02, Amos Jeffries wrote:
>>
>> Option 2 risks the same mess if the AUTH frame is defined end-to-end.
>> But a per-hop frame would work nicely as long as it is clear to server
>> implementers that intermediaries may be the source of the certificate.
>> Not some "user".
>
> This would naturally be hop-by-hop, by virtue of extensions being
> hop-by-hop and by virtue of the setting that enables it also being
> hop-by-hop.
>
>> An option 3 might be to use SETTINGS instead of a dedicated AUTH frame,
>> so that the per-hop nature is made extra clear. That would also be more
>> backward compatible with older h2 implementations, and would fit in with
>> clearing dynamic compression contexts at the same time as authenticating.
>
> SETTINGS wouldn't allow the server to correlate the CertificateRequest
> with a specific request/response exchange.

Ah. Sorry, I seem to have misunderstood your meaning of "provides the
proof that a server needs to regard the entire session to be authentic"
to mean the cert was connection-wide.

If it is stream-specific in terms of HTTP/2 streams rather than TLS
streams, then the frame as in option 2 should be okay. Option 1 still
has major issues with www-auth vs proxy-auth.

>
> Also, while I think of it, we should probably forbid the use of this
> on server-initiated streams (i.e., with server push).  That could
> cause problems.
>

I can see that as being a SHOULD NOT, or forbidden on PUSH_PROMISE
specifically. But using a more general definition like "server initiated"
may cause conflicts with the bi-directional h2 extension.

Amos


Re: Report on preliminary decision on TLS 1.3 and client auth

Martin Thomson-3
On 23 September 2015 at 20:56, Amos Jeffries <[hidden email]> wrote:
> If it is stream-specific in terms of HTTP/2 streams rather than TLS
> streams, then the frame as in option 2 should be okay. Option 1 still
> has major issues with www-auth vs proxy-auth.

Right.  To expand on the problem here, at least in the browser context
- and likely in other cases as well - it is important for the client
to be able to identify which request triggered the certificate
request.  If there are requests from multiple browser windows (or even
applications) sharing the same connection and a CertificateRequest
appears, the client needs to know where to show the associated UX, if
there is any.
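
To make that concrete with option 2's frame (stream numbers and identifiers
here are purely illustrative):

Server@HTTP/2:
EXPECT_AUTH (stream 1)
  id = a
EXPECT_AUTH (stream 3)
  id = b

Server@TLS:
CertificateRequest { id: "a" }
CertificateRequest { id: "b" }

The identifiers are what let the client show the prompt for "a" in the
window that owns stream 1 and the prompt for "b" in the window that owns
stream 3.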

I certainly agree about the end-to-end and hop-by-hop concerns.  An
end-to-end message would prevent the hop-by-hop TLS from being tweaked.

>> Also, while I think of it, we should probably forbid the use of this
>> on server-initiated streams (i.e., with server push).  That could
>> cause problems.
>>
>
> I can see that as being a SHOULD NOT, or forbidden on PUSH_PROMISE
> specifically. But using a more general definition like "server initiated"
> may cause conflicts with the bi-directional h2 extension.

mhm.

Re: Report on preliminary decision on TLS 1.3 and client auth

Ilari Liusvaara
In reply to this post by Martin Thomson-3
On Wed, Sep 23, 2015 at 10:16:42AM -0700, Martin Thomson wrote:

>
> The security claims associated with client authentication require more
> analysis before we can be certain, but the basic idea is that
> authentication merely provides the proof that a server needs to regard the
> entire session to be authentic. In other words, client authentication will
> apply retroactively. This could allow a request sent prior to
> authentication to be considered authenticated. This is a property that is
> implicitly relied on for the existing renegotiation cases and one that we
> might want to exploit.
>
> Each certificate request includes an identifier that allows it to be
> correlated with the certificate that is produced in response. This also
> allows for correlating with application context. This is what I think that
> we can use to fix HTTP/2.

How do you deal with the server sending a reauthentication request (even one
associated with some stream) when the client has streams that can't be
authenticated (no credentials, or cross-origin coalescing)?

Reset the streams blocking authentication? Refuse the identity change? Open
a new connection?

Also, what if after authentication no credentials request is made?
Refuse the request with a network error? Open a new connection?


With CANT+CARE, it was pretty obvious that the client would just route requests
to connections (opening a new one if needed) in order to deal with these
kinds of issues.

> Clients cannot spontaneously authenticate, which invalidates the designs I
> have proposed; however, the basic structure is the basis for the first
> option that I will suggest.

From some comments it seems "unsolicited client auth" isn't about CARE-type
stuff but the client just plain sending its CC/CCV mid-connection?


-Ilari

Re: Report on preliminary decision on TLS 1.3 and client auth

Poul-Henning Kamp
In reply to this post by Amos Jeffries-2
--------
In message <[hidden email]>, Amos Jeffries writes:

>Ah. Sorry, I seem to have misunderstood your meaning of "provides the
>proof that a server needs to regard the entire session to be authentic"
>to mean the cert was connection-wide.

I would like to remind people that, contrary to widespread assumptions,
HTTP doesn't have "sessions".

Sessions are typically implemented by mistaking (groups of) connections
for a session, or by means of opaque unstandardized cookies.

A client cert most naturally applies to the session between the
client and the server, no matter which connections and requests
that session might consist of.

But there is no way at the standardized protocol level to tell which
connections and requests belong to any particular sessions.

The only two architecturally clean solutions I can see are:

A)      Add the concept of sessions to HTTP, so we can tie the
        client cert to one of them.

B)      Point people to the End-to-End Argument, and make the client
        sign each subsequent request with its cert.

A is at best a long term goal.  Probably worth pursuing for this
and many other reasons, but unlikely to happen until HTTP/3.

B is interesting in that it is relatively straightforward, can be
applied to all versions of HTTP (if done right), and lays down
groundwork which can later be extended to offer integrity (in both
directions) without the excess baggage of secrecy.

There are some issues with B, in particular the part about which
headers get signed and what to do if proxies munge them along the
way.

I'd probably just let the signature enumerate which headers it signs,
and make it a policy issue which headers it is a good idea to sign.

In real life, the server would return an indication to the client
along the lines of "I'd like you to sign your headers, using a
cert matching this expression", and if the client does, fine; if not,
the server will have to check policy for what to do.
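
Purely as an illustrative sketch, in the style of the earlier options (the
scheme and parameter names are invented; nothing here is specified):

Server@HTTP:
HEADERS
  :status = 401
  www-authenticate = SignedRequest cert-match="..."

Client@HTTP:
HEADERS
  :method = GET ...
  signature = headers=":method :path date"; cert=...; sig=...

Server@HTTP:
HEADERS
  :status = 200

The signature parameter enumerates the headers it covers, as suggested
above, and policy decides whether that set is acceptable.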

--
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
[hidden email]         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.

Re: Report on preliminary decision on TLS 1.3 and client auth

Yoav Nir-3

> On Sep 25, 2015, at 12:18 PM, Poul-Henning Kamp <[hidden email]> wrote:
>
> --------
> In message <[hidden email]>, Amos Jeffries writes:
>
>> Ah. Sorry, I seem to have misunderstood your meaning of "provides the
>> proof that a server needs to regard the entire session to be authentic"
>> to mean the cert was connection-wide.
>
> I would like to remind people that, contrary to widespread assumptions,
> HTTP doesn't have "sessions".
>
> Sessions are typically implemented by mistaking (groups of) connections
> for a session, or by means of opaque unstandardized cookies.

Why do you call cookies unstandardized?

Yoav


Re: Report on preliminary decision on TLS 1.3 and client auth

Poul-Henning Kamp
--------
In message <[hidden email]>, Yoav Nir writes:

>
>> On Sep 25, 2015, at 12:18 PM, Poul-Henning Kamp <[hidden email]> wrote:
>>
>> --------
>> In message <[hidden email]>, Amos Jeffries writes:
>>
>>> Ah. Sorry, I seem to have misunderstood your meaning of "provides the
>>> proof that a server needs to regard the entire session to be authentic"
>>> to mean the cert was connection-wide.
>>
>> I would like to remind people that, contrary to widespread assumptions,
>> HTTP doesn't have "sessions".
>>
>> Sessions are typically implemented by mistaking (groups of) connections
>> for a session, or by means of opaque unstandardized cookies.
>
>Why do you call cookies unstandardized?

Cookies are standardized just fine.

What I tried to say above is that we don't know which cookie
identifies the session.

--
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
[hidden email]         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.

Re: Report on preliminary decision on TLS 1.3 and client auth

Martin Thomson-3
On 25 September 2015 at 03:14, Poul-Henning Kamp <[hidden email]> wrote:
> What I tried to say above is that we don't know which cookie
> identifies the session.

That's definitely true.  Cookies are a pretty crude tool for something
like this.

I think that your general observation about client certificates is
overwhelmingly true.  On the web at least, I'm seeing a general trend
away from using the TLS layer to authenticate clients.  If cookies are
crude, client certificates make them look like a picture of
sophistication by comparison.  As you say, they are a poor fit for
both the protocol and the architecture.

What I neglected to mention earlier is that the client certificate
mechanism that was being added was viewed more as a necessary evil
than an important feature.  No one liked having to do this, but as
Mark pointed out, there are far more people relying on having the
functionality than we previously thought.

I'd like to find other solutions for the use cases that drive this,
but the view was that we still needed something like this so that we
don't strand those users on old protocols.  We don't have to *like* it
though.

There was strong agreement that this feature would be accompanied by a
prominent and severe admonishment against using it.  I definitely want
to talk about what the alternatives look like, but perhaps we should
start a separate thread on that subject.

Re: Report on preliminary decision on TLS 1.3 and client auth

Poul-Henning Kamp
--------
In message <[hidden email]>, Martin Thomson writes:

>On 25 September 2015 at 03:14, Poul-Henning Kamp <[hidden email]> wrote:
>> What I tried to say above is that we don't know which cookie
>> identifies the session.
>
>[...]
>
>What I neglected to mention earlier is that the client certificate
>mechanism that was being added was viewed more as a necessary evil
>than an important feature.  No one liked having to do this, but as
>Mark pointed out, there are far more people relying on having the
>functionality than we previously thought.

I think in the current climate, we have a lot of latitude for
doing things right, and telling people why they should migrate
to something safer, so we should seriously consider skipping
the workarounds and aiming for something that will hold up well
under pressure.


--
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
[hidden email]         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.

Re: Report on preliminary decision on TLS 1.3 and client auth

Kyle Rose
In reply to this post by Martin Thomson-3
> There was strong agreement that this feature would be accompanied by a
> prominent and severe admonishment against using it.  I definitely want
> to talk about what the alternatives look like, but perhaps we should
> start a separate thread on that subject.

For a variety of reasons, certificate-based browser authentication is
not going away, so in light of this I would be very interested in
helping formulate a replacement either at the protocol layer or at the
application layer with the proper hooks to allow for apps to present a
good UI to the user in ambiguous cases.

In the meantime, the options presented seem no worse than what we're
doing today with HTTP/1.1 and TLS <= 1.2, and clearly better than the
alternatives in the sense that they won't require clients to downgrade
to 1.1 for what is a "normal" case in a lot of places.

Kyle

Re: Report on preliminary decision on TLS 1.3 and client auth

Martin Thomson-3
In reply to this post by Poul-Henning Kamp
On 25 September 2015 at 10:20, Poul-Henning Kamp <[hidden email]> wrote:
> I think in the current climate, we have a lot of latitude for
> doing things right, and telling people why they should migrate
> to something safer, so we should seriously consider skipping
> the workarounds and aiming for something that will hold up well
> under pressure.


I want to do that too, but if that generates too much incentive to
remain on old protocols, I don't think that is the only thing we can
do.

Note that there are a lot of alternatives out there already.  For
instance, the widely deployed OAuth-based systems.  There are some
small differences in their security properties, which might be
critical.

However, I confess that I don't know whether that is a consideration
as much as pure inertia.  Maybe application developers that use client
certificates really like the fact that they have terrible privacy
characteristics.

Either way, I don't believe that we get to play the dictator here.
People will do what they feel that they need to.  If we don't help,
they will implement options that are even worse than those that I
described.

Re: Report on preliminary decision on TLS 1.3 and client auth

Willy Tarreau-3
In reply to this post by Martin Thomson-3
On Fri, Sep 25, 2015 at 10:08:50AM -0700, Martin Thomson wrote:
> I think that your general observation about client certificates is
> overwhelmingly true.  On the web at least, I'm seeing a general trend
> away from using the TLS layer to authenticate clients.

On the *browser* web that's true, and the main reason is that if you
don't have your cert, you can't connect: you get a connection error
instead of a nice page delivered by the application proposing that you
regenerate your cert. Note: some sites manage to get it right, but
it's really complicated because the application needs to be aware of
what happens at the transport layer and needs to trust itself as much
as the transport layer, which is hardly a good thing to do by today's
development standards... And keep in mind that certs are a pain to
manage. Client certs are even worse because you have to support them
at the edge and manage them in the backend from the application. In
the end it provides no better security than the weakest point: the
application, which may sometimes be forced to generate rogue certs
due to regular bugs, but with a longer-lasting effect, since these
certs can be abused even after the application bugs are fixed!

Between reverse-proxies and servers, or between servers, client certs
are much more common and perfectly fit the purpose: guarantee to each
side that they're talking with whom they believe they're talking. For
example, you don't forward an online payment request from an application
server in DC A to a payment server in DC B without client auth, or you're
definitely asking for trouble. Just like you may only accept incoming
connections from your CDN.

> I'd like to find other solutions for the use cases that drive this,
> but the view was that we still needed something like this so that we
> don't strand those users on old protocols.  We don't have to *like* it
> though.
>
> There was strong agreement that this feature would be accompanied by a
> prominent and severe admonishment against using it.  I definitely want
> to talk about what the alternatives look like, but perhaps we should
> start a separate thread on that subject.

We should always be careful not to make security look evil just because
it comes with privacy concerns. If I go to my bank and want to make a
wire transfer, I have to show my ID card. If at some point people felt
concerned that the person they're talking to suddenly knows their name,
saw that as a privacy concern, and would rather not be asked for the
ID card, I would feel much less safe, because I would have a harder time
proving I'm the one I claim to be, and others could pretend to be me.

And if I had the choice between having to show my ID card to everyone
in the queue at the same time as the bank's employee, or murmuring a
secret word hoping no one else hears it, guess what? I'd prefer to show
my ID card to everyone, because as long as I have it and I keep my face,
I'm sure to be the only one able to do this wire transfer, while
the secret word can leak and be reused (even by the employee after he
leaves the bank and ends up in the same queue as me).

There are many situations where identification and authentication are
required, and in doing so we have to disclose our identity. We just
have to not abuse this, and possibly remind those who request it that
those who are asked to provide it may feel uncomfortable and that
sometimes alternatives are just as good. But we should not make this
mechanism look evil, because it does more good than bad.

Regards,
Willy


Re: Report on preliminary decision on TLS 1.3 and client auth

Poul-Henning Kamp
--------
In message <[hidden email]>, Willy Tarreau writes:

>Between reverse-proxies and servers, or between servers, client certs
>are much more common and perfectly fit the purpose: guarantee to each
>side that they're talking with whom they believe they're talking.

Which only works by defining "session" to be "connection".

--
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
[hidden email]         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.

Re: Report on preliminary decision on TLS 1.3 and client auth

Ilari Liusvaara
In reply to this post by Willy Tarreau-3
On Sat, Sep 26, 2015 at 08:37:38AM +0200, Willy Tarreau wrote:
>
> We should always be careful not to make security look evil just because
> it comes with privacy concerns. If I go to my bank and want to make a
> wire transfer, I have to show my ID card. If at some point people felt
> concerned that the person they're talking to suddenly knows their name,
> saw that as a privacy concern, and would rather not be asked for the
> ID card, I would feel much less safe, because I would have a harder time
> proving I'm the one I claim to be, and others could pretend to be me.

IMO, there are two kinds of certs in the web environment (service-to-service
and non-web client-to-server are different ballgames):

1) "global": Shared among all authorized origins.
- Breaks SOP, making these highly privileged.
- Not automatable given the privilege involved.
- Private parts on smartcards or softokens.
- Usually identifies user
- Serious privacy concerns (but sometimes needed).

2) "local": Single origin
- Respects SOP.
- Can be almost entirely automatic (relatively unprivileged).
- Webcrypto, FIDO, etc...
- Usually pseudonymous
- Privacy concerns on level of things like LocalStorage.


There is friction with HTTP/2 connection coalescing here:
- "Global": If connection is for origins A and B, even if cert is
  authorized for A, it might not be authorized for B.
- "Local": If connection is for origins A and B, there can't be
  any single cert for the connection.

Also, some requests can't be sent with client cert at all (cross-origin
non-credentials fetch() for instance), even if target origin has
associated client cert.


-Ilari

Re: Report on preliminary decision on TLS 1.3 and client auth

Willy Tarreau-3
In reply to this post by Poul-Henning Kamp
On Sat, Sep 26, 2015 at 07:11:20AM +0000, Poul-Henning Kamp wrote:
> --------
> In message <[hidden email]>, Willy Tarreau writes:
>
> >Between reverse-proxies and servers, or between servers, client certs
> >are much more common and perfectly fit the purpose: guarantee to each
> >side that they're talking with whom they believe they're talking.
>
> Which only works by defining "session" to be "connection".

Absolutely, but you know, I avoid using the term "session" because
it means different things to different people.

Willy


Reply | Threaded
Open this post in threaded view
|

Re: Report on preliminary decision on TLS 1.3 and client auth

Willy Tarreau-3
In reply to this post by Ilari Liusvaara
On Sat, Sep 26, 2015 at 11:01:44AM +0300, Ilari Liusvaara wrote:
> There is friction with HTTP/2 connection coalescing here:
> - "Global": If connection is for origins A and B, even if cert is
>   authorized for A, it might not be authorized for B.

Note, connection coalescing can only be performed by an entity
having access to the cert, simply because HTTP passes *over*
the authenticated TLS connection. Thus when it can happen
(e.g. a reverse proxy, or a CDN), it's the equipment's cert that
will be presented to the server.

However we still need to make it possible and standard to pass
the client-auth information *inside* HTTP so that each stream
can carry the relevant information. That's what many SSL gateways
do by adding X-SSL-whatever headers right now, and it could be
much cleaner in HTTP/2.
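
For example, a TLS-terminating gateway today might forward something like
this (header names invented for illustration; every product spells them
differently):

Gateway@HTTP (towards origin):
HEADERS
  :method = GET ...
  x-ssl-client-verify = SUCCESS
  x-ssl-client-dn = CN=...

A standard, per-stream way of carrying the same information in HTTP/2
would replace that ad-hoc practice.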

Regards,
Willy


Different ways to authenticate (Was: Re: Report on preliminary decision on TLS 1.3 and client auth)

Ilari Liusvaara
In reply to this post by Martin Thomson-3
On Wed, Sep 23, 2015 at 08:41:24PM -0700, Martin Thomson wrote:

> On 23 September 2015 at 19:02, Amos Jeffries <[hidden email]> wrote:
> >
> > Option 2 risks the same mess if the AUTH frame is defined end-to-end.
> > But a per-hop frame would work nicely as long as it is clear to server
> > implementers that intermediaries may be the source of the certificate.
> > Not some "user".
>
> This would naturally be hop-by-hop, by virtue of extensions being
> hop-by-hop and by virtue of the setting that enables it also being
> hop-by-hop.

Thinking about the problem space of this certificate authentication in
HTTP/2 (long, given that this covers just about every way I can
come up with, including ones I don't think work):

(The terminology choices are probably pretty horrible).


At a high level, one needs mechanisms for two things:
1) The client authenticating that it can represent a given authority
   (the anonymous authority can always be represented; "authority"
   here has nothing to do with the :authority pseudo-header).
2) The client associating an authority it can represent (which does not
   guarantee the server will authorize the request!) with every request
   (HTTP is stateless by nature!).

There are essentially two ways to do the latter:

a) Select the authority by which connection the request is routed to/from.
b) Select the authority by an in-band indication.


If selecting a), selecting the authority by connection, the obvious choice
for authenticating the authority is TLS client certs. There remains a
choice of how to handle the case where an authority with no existing
connection is needed:

c) Change the authority of the existing connection (open a new connection
   if the old authority is needed again).
d) Open a new connection with the desired authority, possibly keeping
   the old one.

The problem with c) is that the existing connection can have active
transfers and can't change authority until those transfers either
end or are aborted (to do otherwise could very well create races
that are exploitable for privilege escalation). d) creates more
connections, but the number should remain fairly limited (much
less than HTTP/1.1 in the same situation).

I think d) is the clearer of those two (it is essentially what the
CANT+CARE(?) proposal was).


If selecting b), selecting the authority by in-band indication, TLS
client certs don't work, because one needs to authenticate multiple
authorities on one connection, and there can be only one TLS
client cert.

Thus the certificate sending and verification needs to be implemented
at the HTTP/2 level. The main choices are:

e) PUT/POST to a special resource
f) A new HTTP verb
g) A new HTTP/2 frame type.

Now, this kind of authentication is inherently hop-by-hop and could
target a destination other than the origin, so e) doesn't actually work
properly. HTTP verbs could be useful if one wanted to make the
mechanism also work for HTTP/1.1, but as we see when considering
request authority designation, this type of mechanism doesn't work
properly in HTTP/1.1. This leaves a new HTTP/2 frame type.

The choices for how the in-band indication for request authority
is done are:

h) New pseudo-header
i) New header
j) New HEADERS flag + field
k) New HTTP/2 frame that changes the authority that newly opened
   streams get.
l) SETTINGS that sets the authority for newly opened streams.


h) Doesn't work: HTTP/1.1 doesn't have pseudo-headers and HTTP/2
forbids defining new ones. i) Doesn't work either, since the mechanism
is hop-by-hop by nature and HTTP/2 does not have per-hop headers (other
than one or two special cases).

This leaves j), k) and l), which all seem workable (but HTTP/2-
specific). In k), the frame could theoretically be combined with the
frame that shows the client can represent the authority, but in
practice this might be a bad idea due to bandwidth usage (certificates
might not be exactly small and changes might not be rare).

k) and l) need to be synchronous with stream opens (SETTINGS is
already synchronous with everything it can appear in the middle of),
so as to avoid race conditions resulting in the server misinterpreting
the client's request.


For reference, the SPDY client auth was equivalent to g) and j)
(a frame type for showing the client can represent an authority, and
a header-frame field for authority designation).

For in-band operation, I think either g)+j) or g)+l) is the
cleanest (the remaining one seems quite odd to me), with maybe a
slight preference for g)+j).

As a note: originally I had h)-k), but then noticed that the authority
designation fits in a SETTINGS value (even 32 bits is overkill), so I
added l) to the list.
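
To make g)+j) concrete, a sketch in the same notation as Martin's options
(frame, flag and field names are placeholders only):

Client@HTTP/2:
CERT_PROOF (placeholder frame)
  authority-id = 1
  certificate chain + proof of possession ...

Client@HTTP/2:
HEADERS (placeholder AUTHORITY flag set, authority-id = 1)
  :method = GET ...

The frame establishes that the client can represent authority-id 1; the
HEADERS field then designates that authority for an individual request.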


Finally, some considerations on adding an HTTP-level certificate
verification mechanism:
- Only signature certificates work, but fortunately non-signature
  certificates are close to non-existent.
- Use channel bindings to bind to the lower layer. Make sure those
  bindings are actually bindings (i.e. include nonces; cf. the TLS
  triple handshake attack)!
- The data to be signed should include the lower layer name, so if
  the thing runs on top of multiple lower layer protocols (HTTPS
  definition says "secure transport" not "TLS"!) one doesn't get
  cross-protocol attacks.
- Sign the certificate too; some signature schemes don't even
  bind the key properly and nothing binds the certificate.
- One probably wants to keep support for crap like (non-EC) DSA or
  SHA-1 (or worse, MD5!) out entirely.
- Copying the TLS 1.3 client signature format (but with a different
  context) might be a good idea.
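
Pulling those points together, the data covered by the client's signature
might be composed along these lines (purely illustrative, not a defined
format):

to-be-signed = context label (distinct from the TLS-internal one)
           || lower-layer protocol name
           || channel binding from the underlying connection
           || hash of the certificate being proven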


> Also, while I think of it, we should probably forbid the use of this
> on server-initiated streams (i.e., with server push).  That could
> cause problems.

I think a reasonable way to handle the authority of server-pushed streams
would be for them to inherit the authority of the associated stream.


-Ilari

Re: Report on preliminary decision on TLS 1.3 and client auth

Martin Thomson-3
In reply to this post by Martin Thomson-3
On 23 September 2015 at 10:16, Martin Thomson <[hidden email]> wrote:
> Here is a summary of the applicable pieces, plus what options I think it
> provides for HTTP/2...

With the help of Mike Bishop [7], I've just submitted a draft that
describes option 2 in more detail, including something for TLS 1.2.

  https://tools.ietf.org/html/draft-thomson-http2-client-certs-00

I think that this is the best of all the bad options available to us.
In an ideal world, I think that I would prefer to kill this feature,
but we tried that once already and it wasn't working so well.  So
this is plan B.

The TLS 1.2 option requires a new TLS extension.  If we think that
this is a good idea, we'll have to coordinate with the TLS working
group.

--Martin

[7] Mike is on vacation, and I did make a few changes without his
approval, so I'll have to ask forgiveness if I made a mistake...  In
other words, all the blame is mine, and the credit Mike's.
