2 questions

2 questions

Glen
Hi,

I have 2 questions, if I may.

1. What were the reasons for HTTP/2 not requiring TLS?

Is there a significant performance consideration, is it related to the cost of certificates (which is now fairly low or even free), or are there other technical reasons?

It would be nice if the web was just "secure by default", and I would have thought that now would be the right time to move in that direction.

Also, at least 2 of the major browser vendors have said that they won't be supporting HTTP/2 without TLS, so surely no one is going to want to run their website without it?

2. Are the BREACH and CRIME exploits still applicable, especially with regard to content (body) compression? If so, does that mean that it's not possible to compress content (with gzip, for example) and still maintain security?

Please respond as if I were a layman, as my knowledge on these subjects is somewhat limited.

Thanks.




Re: 2 questions

Yoav Nir-3

> On Mar 28, 2015, at 5:43 PM, Glen <[hidden email]> wrote:
>
> Hi,
>
> I have 2 questions, if I may.
>
> 1. What were the reasons for HTTP/2 not requiring TLS?
>
> Is there a significant performance consideration, is it related to the cost of certificates (which is now fairly low or even free), or are there other technical reasons?
>
> It would be nice if the web was just "secure by default", and I would have thought that now would be the right time to move in that direction.
>
> Also, at least 2 of the major browser vendors have said that they won't be supporting HTTP/2 without TLS, so surely no one is going to want to run their website without it?

Right now, about a third of web requests use TLS. Clearly there is a constituency for the web in the clear, although there is a definite trend towards more TLS. If HTTP/2 is supposed to replace HTTP/1 entirely, it should support both. Two vendors said they would not support plaintext HTTP/2; one said it would. Using the Upgrade mechanism, an in-the-clear website can support both HTTP/1 and HTTP/2.
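
For a concrete picture, here is a minimal sketch of that upgrade handshake using Python's standard library (the host, path and port are placeholders; a real client would follow the 101 response with the HTTP/2 connection preface and its own SETTINGS frame):

    import socket

    # Plain HTTP/1.1 request asking the server to switch to cleartext HTTP/2 ("h2c").
    request = (
        b"GET / HTTP/1.1\r\n"
        b"Host: example.com\r\n"
        b"Connection: Upgrade, HTTP2-Settings\r\n"
        b"Upgrade: h2c\r\n"
        b"HTTP2-Settings: \r\n"   # base64url-encoded SETTINGS payload; empty here
        b"\r\n"
    )

    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(request)
        reply = sock.recv(4096)
        # An h2c-capable server answers "HTTP/1.1 101 Switching Protocols" and then
        # continues the connection in HTTP/2; any other server just serves HTTP/1.1.
        print(reply.split(b"\r\n", 1)[0].decode())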

As you say, it would be nice if the web were secure by default, but it is not up to you or me to force the web in that direction, or to withhold better HTTP performance until the web fits my definition of “nice”. This is the first rule of the Internet: you are so not in charge.

TLS has a significant cost in processing power, and there are a few cases where its use is inappropriate. Those cases can probably be served nicely by HTTP/1, but we aim to replace HTTP/1.

> 2. Are the BREACH and CRIME exploits still applicable, especially with regard to content (body) compression? If so, does that mean that it's not possible to compress content (with gzip, for example) and still maintain security?

CRIME was specific to retrieving cookie information by having a shared compressed state for resource name (which was under attacker control) and the cookie (which was constant).
HTTP/2 does not compress headers like that, so this is gone.
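
To make the length side channel concrete, here is a toy illustration with plain zlib (the cookie value and guesses are invented; real CRIME/BREACH attacks are considerably more involved):

    import zlib

    SECRET = b"Cookie: session=7f3a9c"

    def compressed_size(attacker_bytes: bytes) -> int:
        # Attacker-controlled bytes and the secret share one compression context,
        # which is the situation the old shared header compression created.
        return len(zlib.compress(b"GET /?q=" + attacker_bytes + b"\r\n" + SECRET))

    print(compressed_size(b"Cookie: session=7f3a"))  # guess shares a prefix with the secret
    print(compressed_size(b"Cookie: session=zzzz"))  # guess does not
    # The matching guess usually compresses a byte or two smaller, so the length of
    # the (encrypted) output leaks information about the secret, one guess at a time.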

BREACH relies on content (body) compression, and on the attacker being able to inject stuff into the response, discovering pieces of static content such as CSRF tokens. There are ways to foil this attack that worked for HTTP/1, and they should apply similarly to HTTP/2. HTTP/2 itself does nothing to solve this.

HTH

Yoav



Re: 2 questions

Cory Benfield
In reply to this post by Glen

> On 28 Mar 2015, at 14:43, Glen <[hidden email]> wrote:
>
> 1. What were the reasons for HTTP/2 not requiring TLS?

The shortest answer to this is that there was not much extra cost in allowing plaintext HTTP/2, and it was requested by several WG members for specific use cases where TLS may not be appropriate.

In practice, most of HTTP/2 in the open web will be deployed using TLS because by and large plaintext intermediaries are likely to misunderstand or mangle HTTP/2. Chrome and Firefox have no plans to support HTTP/2 in plaintext, which in practice means most websites won’t bother either.

> It would be nice if the web was just "secure by default", and I would have thought that now would be the right time to move in that direction.

We are. =) Check out the opportunistic encryption draft[0] for examples of how we’re moving in that direction. Firefox already supports this draft, so websites can today start offering opportunistic HTTP-over-TLS if they would like to.
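
As a rough sketch of what that looks like from the server side (this is just my reading of the draft; the WSGI app and port are illustrative), the plaintext origin advertises an alternative service that speaks h2 over TLS:

    def app(environ, start_response):
        headers = [
            ("Content-Type", "text/plain"),
            # Advertise an HTTP/2-over-TLS alternative for this http:// origin;
            # a client that supports the draft may move its http:// requests there.
            ("Alt-Svc", 'h2=":443"'),
        ]
        start_response("200 OK", headers)
        return [b"hello over plain http\n"]

    # e.g. wsgiref.simple_server.make_server("", 80, app).serve_forever()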

> 2. Are the BREACH and CRIME exploits still applicable, especially with regard to content (body) compression? If so, does that mean that it's not possible to compress content (with gzip, for example) and still maintain security?

Yes with regard to body compression, no with regard to headers. As I understand it this can be somewhat mitigated by the use of padding in HTTP/2, but it cannot be entirely removed.
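
As a toy sketch of the padding idea (the bucket size and bodies are made up): rounding the compressed length up to a bucket boundary hides small differences, which blunts the length oracle without eliminating it:

    import zlib

    BUCKET = 32  # pad compressed output up to the next multiple of 32 bytes

    def padded_length(body: bytes) -> int:
        n = len(zlib.compress(body))
        return ((n + BUCKET - 1) // BUCKET) * BUCKET

    # Bodies whose compressed sizes differ by only a few bytes now look identical
    # on the wire, so an attacker needs far more probes to learn anything.
    print(padded_length(b"csrf=ABCD&reflected=ABCD"))
    print(padded_length(b"csrf=ABCD&reflected=ABCE"))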

It is possible to compress content without harming security *in certain cases* with well-crafted algorithms (see HPACK), but it may not be possible to do it with gzip. I’m not an expert in this area, so I won’t say more: I’ll let someone who knows more than me dive in.
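
Very roughly, though, the trick is that HPACK never LZ-matches one header's bytes against another's: it indexes whole name/value pairs and sends everything else as (Huffman-coded) literals. A toy sketch of the idea, not the real wire format:

    table = []   # stand-in for HPACK's dynamic table of (name, value) entries

    def encode_header(name, value):
        if (name, value) in table:
            # A repeated pair is replaced by a small index; nothing about the
            # value's content influences how *other* headers are encoded.
            return "indexed(%d)" % table.index((name, value))
        table.insert(0, (name, value))
        return "literal(%s: %s)" % (name, value)

    print(encode_header("cookie", "session=7f3a9c"))   # first time: sent literally
    print(encode_header("cookie", "session=7f3a9c"))   # later on: just an index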

Cory

[0]: https://tools.ietf.org/html/draft-ietf-httpbis-http2-encryption-01

Re: 2 questions

Constantine A. Murenin
In reply to this post by Glen
On 2015-03-28 7:43, Glen wrote:
> 1. What were the reasons for HTTP/2 not requiring TLS?
>
> Is there a significant performance consideration, is it related to the cost of certificates (which is now fairly low or even free), or are there other technical reasons?

This is incorrect.  The cost of certificates for webmasters is not
"fairly low or even free".

If you have one single domain and you disregard the opportunity costs
you have to repeatedly endure in order to renew the certificate at least
once per year (for the rest of the life of the web-site), sure, the cost
may indeed be "fairly low or even free".

However, that is not the case if you have a few dozen domains (or even
subdomains), have hosted all of them on a single IPv4 address before the
HTTPS considerations, need to support fairly recent hardware running
Android 2.3 (which has no SNI), and want all of your users, including
those on Android 2.x, to be able to navigate to your web-site when
clicking the (https://) links posted outside of your control, etc.
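
To illustrate the SNI point (host names and contexts below are placeholders): the server can only pick a per-site certificate when the client sends a server name, which Android 2.x clients never do, so supporting them means one IP address per certificate:

    import ssl

    # One TLS context per site, each of which would load that site's certificate.
    site_contexts = {
        "shop.example": ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER),
        "blog.example": ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER),
    }
    default_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

    def choose_certificate(sock, server_name, original_context):
        # server_name is None for clients that do not send SNI (e.g. Android 2.x),
        # so those clients can only ever be shown the single default certificate.
        if server_name in site_contexts:
            sock.context = site_contexts[server_name]

    default_context.sni_callback = choose_certificate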

Think of all the consumer electronic devices like the 15 USD 802.11n
wireless routers -- who's going to be paying for their certificates?
Who will be renewing them every year at the "fairly low or even free" cost?

> It would be nice if the web was just "secure by default", and I would
> have thought that now would be the right time to move in that direction.

Yes, but mandating a mandatory "https://" address scheme is not a
solution.  As has been mentioned, Opportunistic Encryption through the
"http://" address scheme is what would help here instead.

Cheers,
Constantine.


Re: 2 questions

Matthew Kerwin
In reply to this post by Glen

On 29 March 2015 at 00:43, Glen <[hidden email]> wrote:
Hi,

I have 2 questions, if I may.

1. What were the reasons for HTTP/2 not requiring TLS?

[...]


It would be nice if the web was just "secure by default", and I would have thought that now would be the right time to move in that direction.


It's worth remembering that HTTP also exists outside the web. I know this is the *I*ETF, and we're specifying *internet* standards, but it behooves us to think outside the big grey cloud if we can do something that benefits the entire worldly computer community, even those parts not on the open net/web -- especially when it's a protocol as big as HTTP.

I'd rather not run TLS on my firewalled/airgapped home network when there's no real reason, especially if that required an insecure cert to be hard-coded into the firmware of the web server in my intelligent switch, or my printer, or my smart-fridge (if I had one of those). The counter-argument was that I could just use HTTP/1 there, but that's either lame ("H2 isn't as useful as HTTP/1") or snobbish ("you're not good enough to use H2"), depending on how you interpret it. It would also disappoint me if I were to take part in the WG and help (in a small way) to define this awesome new protocol, and even work on my own implementation, only to discover that I couldn't use it in some circumstances.

And on costs: I'm personally not keen on paying extra (ongoing) for my web hosting to have a unique IP address, and then paying every year for a SAN certificate for my vhosts (I'd need to cover both foo.net and www.foo.net at the least).

Those were my main motivations for pushing back. And as others have said, there are other ways to get "secure by default" than requiring "TLS everywhere."

Cheers
--
  Matthew Kerwin
  http://matthew.kerwin.net.au/

Re: 2 questions

Walter H.
In reply to this post by Cory Benfield
Hello,

On 28.03.2015 22:36, Cory Benfield wrote:
>> On 28 Mar 2015, at 14:43, Glen<[hidden email]>  wrote:
>>
>> 1. What were the reasons for HTTP/2 not requiring TLS?
> The shortest answer to this is that there was not much extra cost in allowing plaintext HTTP/2, and it was requested by several WG members for specific use cases where TLS may not be appropriate.
those use cases include any public website without any access
restrictions ...
> In practice, most of HTTP/2 in the open web will be deployed using TLS
the wrong way ...
> Chrome and Firefox have no plans to support HTTP/2 in plaintext, ...
this doesn't make any sense, because if every website is encrypted,
users become less sensitive to invalid X.509 certificates ...
and that makes it easier to fake banking sites - the most sensitive part
of encrypted websites;
>> It would be nice if the web was just "secure by default", and I would have thought that now would be the right time to move in that direction.
> We are. =) Check out the opportunistic encryption draft[0] for examples of how we’re moving in that direction. Firefox already supports this draft, so websites can today start offering opportunistic HTTP-over-TLS if they would like to.
as said above: the wrong way;

just think about why money transports are escorted by police and not
everything else, too.

Greetings,
Walter




Re: 2 questions

Walter H.
In reply to this post by Constantine A. Murenin
On 29.03.2015 03:19, Constantine A. Murenin wrote:

> On 2015-03-28 7:43, Glen wrote:
>> 1. What were the reasons for HTTP/2 not requiring TLS?
>>
>> Is there a significant performance consideration, is it related to
>> the cost of certificates (which is now fairly low or even free), or
>> are there other technical reasons?
>
> This is incorrect.  The cost of certificates for webmasters is not
> "fairly low or even free".
>
In fact they are fairly low or even free, because nobody forces you to
buy from the most expensive dealer ;-)

just try e.g. StartCom ;-)
> Think of all the consumer electronic devices like the 15 USD 802.11n
> wireless routers -- who's going to be paying for their certificates?
any cheap routing box, with or without WLAN, uses self-signed
certificates; and business environments have different use cases and/or
hardware;
and there they can run their own CA, too ...

> Yes, but mandating a mandatory "https://" address scheme is not a
> solution.
use TLS with the "https://" address scheme, and
>   As has been mentioned, Opportunistic Encryption through the
> "http://" address scheme is what would help here instead.
not any kind of encryption over the "http://" address scheme;

you don't sell cows as pigs, do you?

Greetings,
Walter




RE: 2 questions

Mike Bishop
You're skipping the discussion about why the price of the cert is not the cost of running TLS.  There's admin overhead in renewing the cert for each domain, there's network infrastructure overhead in providing each domain a unique IP address (because you can't guarantee every client supports SNI, much as we'd like to), and that additional network infrastructure cost means hosting becomes more expensive.  Free certs don't mean free TLS, though obviously they're a nice step in that direction.

But fundamentally, the argument was that if HTTP/2 needed to cover the same scenarios as HTTP/1.1, that set of scenarios included traffic that, by operator choice for whatever reason, is not encrypted.  We're not trying to judge why the operator does it that way, and there will be obvious practical barriers to using HTTP/2 across the internet in plaintext for a while, but the scenario exists and was in-charter, so we continue to support it.  (The same reason we didn't take beneficial-but-breaking changes to Cookies or other semantics, for example.)


Re: 2 questions

Adrien de Croy
In reply to this post by Yoav Nir-3

I can buy that 1/3 of web requests use TLS.

however, that does not translate to 1/3 of web sites using TLS.  FB and
google alone probably account for 1/3 of web requests.

There are surely hundreds of millions of sites.  That's at least tens of
millions of administrators who will need to take on the burden of making
TLS work on their site.  Many will not see any point in this.  Pretty
much all the sites that felt a need to deploy TLS will have already done
so, and the others will not thank the IETF or google or the chromium
project for attempting to force costs on them.

Please, people, do not conflate the percentage of requests with the percentage of sites.


Re: 2 questions

Cory Benfield
On 30 March 2015 at 04:15, Adrien de Croy <[hidden email]> wrote:

>
> I can buy that 1/3 of web requests use TLS.
>
> however that does not apply to 1/3 of web sites using TLS.  Probably just FB
> and google alone account for 1/3 of web requests.
>
> There are surely hundreds of millions of sites.  That's at least tens of
> millions of administrators who will need to take on the burden of making TLS
> work on their site.  Many will not see any point in this.  Pretty much all
> the sites that felt a need to deploy TLS will have already done so, and the
> others will not thank the IETF or google or the chromium project for
> attempting to force costs on them.

No-one is being *forced* to do anything. HTTP/1.1 is not going away.
If you dig back through the archives of this working group you'll
repeatedly find statements from almost all camps that HTTP/1.1 will be
around for the foreseeable future. Website owners that cannot set up
TLS will still find plenty of support for plaintext HTTP.

In this case I think Google and Firefox are probably right: HTTP/2 in
plaintext is likely to break frequently and mysteriously. This is
mostly because of intermediaries that believe they understand HTTP,
but don't do it very well (HAProxy is a good example I can think of off
of the top of my head). These intermediaries are usually transparent
to HTTP/1.1 users, but they will likely break HTTP/2 traffic over port
80. Chrome and Firefox are therefore acting in the interest of both
users and operators when they forbid this kind of traffic. They're
saving your users from thinking your website is broken because their
ISP deployed some terrible intermediate 'service' that mangles HTTP/2
(consider Comcast's injection of HTTP headers, for example).

At this point in time, my HTTP/2 implementation does not support
plaintext HTTP/2. I will add support for it in the next few weeks, but
I do not expect it to work in the vast majority of cases, and will be
emitting warning logs to that effect.


Re: 2 questions

Amos Jeffries-2
In reply to this post by Walter H.
On 30/03/2015 12:34 a.m., Walter H. wrote:

> On 29.03.2015 03:19, Constantine A. Murenin wrote:
>> On 2015-03-28 7:43, Glen wrote:
>>> 1. What were the reasons for HTTP/2 not requiring TLS?
>>>
>>> Is there a significant performance consideration, is it related to
>>> the cost of certificates (which is now fairly low or even free), or
>>> are there other technical reasons?
>>
>> This is incorrect.  The cost of certificates for webmasters is not
>> "fairly low or even free".
>>
> In fact they are fairly low or even free, because nobody tells you
> buying at the most expensive dealer ;-)
>
> just try e.g. StartCom ;-)

Tried that. Got as far as where their Terms and Conditions forbid me from
getting certs on behalf of my clients.


>> Think of all the consumer electronic devices like the 15 USD 802.11n
>> wireless routers -- who's going to be paying for their certificates?
> any cheap routing box, either with WLAN or not does use self-signed
> certificates; and business environments have different use cases and/or
> hardware;
> and there they can have their own CA, too ...

Go ahead. Try it. The modern browsers will all throw up confusing-looking
popups about security thingys, red stop signs, unlocked padlocks, etc. in
front of their users on each request using self-signed certs, and Chrome
will not even permit the device control pages to be opened.

>
>> Yes, but mandating a mandatory "https://" address scheme is not a
>> solution.
> use TLS with the address scheme "https://", and
>>   As has been mentioned, Opportunistic Encryption through the
>> "http://" address scheme is what would help here instead.
> not any encryption with the "http://" address scheme;
>
> you don't sell cows as pigs, do you;

Exactly why http:// is used instead of https://.

Like selling a bull instead of a cow - it has many great and similar uses
(meat, better workloads, etc.), same breed of beast, but milk supply is
not in the marketing brochure.

If you want milk, pay more for a real cow. I just need something that
will pull a cart.

Don't cull the entire pig population because someone sold you a "beef"
sausage filled with pork. No one can save your bacon after that.

Amos



Re: 2 questions

Amos Jeffries-2
In reply to this post by Cory Benfield
On 30/03/2015 9:26 p.m., Cory Benfield wrote:

> On 30 March 2015 at 04:15, Adrien de Croy wrote:
>>
>> I can buy that 1/3 of web requests use TLS.
>>
>> however that does not apply to 1/3 of web sites using TLS.  Probably just FB
>> and google alone account for 1/3 of web requests.
>>
>> There are surely hundreds of millions of sites.  That's at least tens of
>> millions of administrators who will need to take on the burden of making TLS
>> work on their site.  Many will not see any point in this.  Pretty much all
>> the sites that felt a need to deploy TLS will have already done so, and the
>> others will not thank the IETF or google or the chromium project for
>> attempting to force costs on them.
>
> No-one is being *forced* to do anything. HTTP/1.1 is not going away.
> If you dig back through the archives of this working group you'll
> repeatedly find statements from almost all camps that HTTP/1.1 will be
> around for the foreseeable future. Website owners that cannot set up
> TLS will still find plenty of support for plaintext HTTP.

So your answer is "Just use HTTP/1.1" ?

Regardless of how long the transition would take, one of the goals of
HTTP/2 is to replace it. *Any* network which is forced to stay with
HTTP/1 simply because of a missing protocol capability is a failure of
HTTP/2.

>
> In this case I think Google and Firefox are probably right: HTTP/2 in
> plaintext is likely to break frequently and mysteriously.

Guesses and supposition. Look at who you are throwing those arguments at
... the very authors of the major middleware implementations.

The Chrome choice was based on SPDY metrics IIRC, which measured how
many connections over TLS were forced to "just use HTTP/1.1" versus
allowed to use SPDY. That was done under conditions where *none* of the
middleware supported SPDY and TLS was able to supply a bypass.

Neither of those measurement conditions is true for HTTP/2. We, the major
middleware implementation authors, participate in the WG and are
actively implementing HTTP/2 already. The growth of TLS interception
will undoubtedly have reduced TLS's ability to bypass middleware.


The middleware argument for TLS is a red herring.


> This is
> mostly because of intermediaries that believe they understand HTTP,
> but don't do it very well (HAProxy is a good example I can think off
> of the top of my head). These intermediaries are usually transparent
> to HTTP/1.1 users, but they will likely break HTTP/2 traffic over port
> 80.

Those of us participating in the WG have already ensured that our
software, even legacy installs, interoperates properly with HTTP/2 to
trigger HTTP/1.1 fallback cleanly during the transition. The HTTP/2
protocol is also a lot more strict syntactically so many mistakes and
problems are simply no longer possible once the software is upgraded.


> Chrome and Firefox are therefore acting in the interest of both
> users and operators when they forbid this kind of traffic. They're
> saving your users from thinking your website is broken because their
> ISP deployed some terrible intermediate 'service' that mangles HTTP/2
> (consider Comcast's injection of HTTP headers, for example).

Injection of headers is compliant with HTTP (both versions).

One can as easily point at the many millions of users forced to endure
horrible network lag issues and sometimes outright DoS when Chrome
implemented SDCH encoding.


Don't kid yourself about browsers protecting either users or websites -
at least no more than they need to make gains in the browser wars. We
have a loooong laundry list of things they refuse to do that would
vastly improve end users' privacy, security, and website efficiency.
Their focus IME is towards their own corporate goals (as one should expect).

>
> At this point in time, my HTTP/2 implementation does not support
> plaintext HTTP/2. I will add support for it in the next few weeks, but
> I do not expect it to work in the vast majority of cases, and will be
> emitting warning logs to that effect.
>

Are you emitting similar warnings for all HTTP/2-over-TLS failures?
 You will find a lot of middleware out there these days decrypting the
TLS and demanding HTTP/1 inside. The magic PRI prefix "request" works
the same regardless of TLS usage - as it was designed to.
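
For reference, that "magic" prefix is just a fixed 24-octet string, so detection is trivial either way; a sketch (the helper name is mine):

    # The HTTP/2 client connection preface; identical whether or not TLS is in use.
    CONNECTION_PREFACE = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"

    def looks_like_http2(first_bytes: bytes) -> bool:
        # A server or intermediary can sniff the start of a connection and fall
        # back to HTTP/1.x handling whenever the preface is absent.
        return first_bytes.startswith(CONNECTION_PREFACE)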

Amos


Re: 2 questions

Cory Benfield
On 30 March 2015 at 10:27, Amos Jeffries <[hidden email]> wrote:
> So your answer is "Just use HTTP/1.1" ?
>
> Regardless of how long the transition would take one of the goals of
> HTTP/2 is to replace it. *Any* network which is forced to stay with
> HTTP/1 simply because of a missing protocol capability is a failure of
> HTTP/2.

What protocol capability would that be? As I said far earlier in this
thread, HTTP/2 supports plaintext: Chrome and Firefox don't support
it. The protocol is capable: the implementations are not.

If the protocol suffers problems from intermediaries that only understand
HTTP/1.1, then yes, there was a failure in the protocol when we chose
to use TCP port 80 for plaintext. We can deal with that problem if and
when it does arise.

>> In this case I think Google and Firefox are probably right: HTTP/2 in
>> plaintext is likely to break frequently and mysteriously.
>
> Guesses and supposition. Look at who you are throwing those arguments at
> ... the very authors of the major middleware implementations.

I apologise for tarring with the same brush, that was never my
intention. However, I'm talking to authors of *two* of the major
middleware implementations. There are lots of them, some of which do
not support HTTP/2 and may never (varnish leaps to mind). Many
intermediaries will support HTTP/2 in plaintext well and cleanly:
those are not the ones I believe will cause problems. My worry is
*bad* middleware implementations that assume that all port 80 traffic
is HTTP/1.1, and therefore make unexpected modifications to traffic.
It is not unreasonable to want to avoid that problem by preventing
intermediaries seeing HTTP/2.

> The Chrome choice was based on SPDY metrics IIRC. Which measured how
> many connections over TLS were force to "just use HTTP/1.1" versus
> allowed to use SPDY. That was done under conditions where *none* of the
> middleware supported SPDY and TLS was able to supply a bypass.
>
> Neither of those measurement conditions are true for HTTP/2. We major
> middleware implementations authors participate in the WG and are
> actively implementing HTTP/2 already. The growth of TLS interception
> will undoubtedly have reduced TLS ability to bypass middleware.
>
>
> The middlware argument for TLS is a red herring.

That is as may be, but you're arguing with the wrong person. I've
already said I plan to support HTTP/2 in plaintext in my
implementation. I'm simply repeating what my concerns are with how
successful it will be, certainly in the short term. My response was to
a question asking why HTTP/2 requires TLS, and I was saying that the
protocol does not, but some implementations do.

> Injection of headers is compliant with HTTP (both versions).

Sure is, but I was talking about doing it *badly*, which is not the
same thing. For every good, up-to-date HTTP intermediary there are two
bad ones (usually written by cowboys like me). The same is true of
servers and clients, of course, but the difference is that bad servers
are under the control of site administrators (incentivised to improve
user experience) and bad clients are under the control of users
(incentivised to change to a working client). Bad intermediaries are
often transparent and under the control of an unrelated third party.

Obviously, this is a generalisation, but it certainly applies quite widely.

> One can as easily point at the many millions of users forced to endure
> horrible network lag issues and sometimes outright DoS when Chrome
> implemented SDCH encoding.
>
>
> Dont kid yourself about browsers protecting either users or websites -
> at least no more than they need to make gains in the browser wars. We
> have a loooong laundry list of things they refuse to do that would
> vastly improve end users privacy, security, and website efficiency.
> Their focus IME is towards their own corporate goals (as one should expect).

Yes, we can all accept blame here. The difference, as I mention above,
is in what those involved in an HTTP transaction can do. They have
more power over servers and clients than they do over intermediaries.

>> At this point in time, my HTTP/2 implementation does not support
>> plaintext HTTP/2. I will add support for it in the next few weeks, but
>> I do not expect it to work in the vast majority of cases, and will be
>> emitting warning logs to that effect.
>>
>
> Are you emitting similar warnings for all HTTP/2-over-TLS failures?
>  You will find a lot of middleware out there these days decrypting the
> TLS and demanding HTTP/1 inside. The magic PRI prefix "request" works
> the same regardless of TLS usage - as it was designed to.

Actually, I treat HTTP/2-over-TLS failures more aggressively: I throw
exceptions. This is primarily a security-conscious move, attempting to
maintain the semantics of HTTPS. At some stage I'll likely get a
feature request to allow this behaviour, but until that time I'm
holding HTTP/2-over-TLS to a higher standard than plaintext HTTP/2.


Re: 2 questions

Adrien de Croy
In reply to this post by Cory Benfield

Well, from where I stand there is a certain amount of duress being
applied to move people to TLS.

* browser vendors saying they won't support plaintext (I wonder how long
that will last)
* not much effort really going into working through issues with the
plaintext version, since it's always assumed that it won't
really be used and people will stick with 1.1 or go to https, and issues
will be solved.  Somehow.  Maybe.  Hopefully.

Not many other options have been seriously considered for solving the
presumed problem of bad things happening on port 80.  Like moving to
another port.  100 is still available.

It is reasonable to want to avoid bad things, but there are other ways
than TLS; and thanks to the push to https everywhere, everyone now has a
MITM that will probably make port 443 just as broken as port 80.  Maybe
not quite, since I guess ISPs are less likely to do that.  But still a
lot worse now than 2 years ago.

Not to mention the concerns around moving en masse to TLS and what that
will do for the security of TLS itself.  I'm not sure it's ready for the
load.  CA compromises will affect a lot more sites.  They do happen and
will continue to do so, especially as the bounty goes up by a few orders
of magnitude.  A lot of eggs going into not many (CA) baskets.

Maybe we should be putting the effort into that first - solving issues
with PKI before loading the whole internet onto it.  Maybe they are
already doing that.

Adrien




Re: 2 questions

Yoav Nir-3

> On Mar 30, 2015, at 3:29 PM, Adrien de Croy <[hidden email]> wrote:
>
>
> well from where I stand there is a certain amount of duress being applied to move people to TLS.
>
> * browser vendors saying they won't support plaintext (I wonder how long that will last)
> * not really much effort going into working through issues with plaintext version since it's always supposedly assumed that it won't really be used and people will stick with 1.1 or go to https, and issues will be solved.  Somehow.  Maybe.  Hopefully.
>
> not many other options have been seriously considered for solving the presumed problem of bad things happening on port 80.  Like moving to another port.  100 is still available.
>
> It is reasonable to want to avoid bad things but there are other ways than TLS, but thanks to the push to https everywhere now everyone has a MITM that will probably make port 443 just as broken as port 80.  Maybe not quite, since I guess ISPs are less likely to do that.  But still a lot worse now than 2 years ago.

Not quite. ALPN is carefully engineered to play nice with MitM. The MitM that are installed now (and for the last 8 years) will easily strip the ALPN extension and downgrade client and server to HTTP/1.
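
From the client side the negotiation is tiny; a sketch with Python's ssl module (the host name is a placeholder). A middlebox that strips or rewrites the ALPN extension simply leaves the client on "http/1.1" or None, i.e. a silent downgrade:

    import socket
    import ssl

    context = ssl.create_default_context()
    context.set_alpn_protocols(["h2", "http/1.1"])   # offer HTTP/2, fall back to 1.1

    with socket.create_connection(("example.com", 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
            # "h2" only if both ends (and anything in the middle) kept the extension.
            print(tls_sock.selected_alpn_protocol())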

Yoav




Re: 2 questions

Roland Zink
In reply to this post by Adrien de Croy
On 30.03.2015 14:29, Adrien de Croy wrote:
>
> well from where I stand there is a certain amount of duress being
> applied to move people to TLS.
>
> * browser vendors saying they won't support plaintext (I wonder how
> long that will last)
Me too, as this doesn't seem to be much effort.
> * not really much effort going into working through issues with
> plaintext version since it's always supposedly assumed that it won't
> really be used and people will stick with 1.1 or go to https, and
> issues will be solved.  Somehow.  Maybe.  Hopefully.
It is easier to upgrade to an http2 capable server than to switch to
https. So I would prefer to see plaintext http2 as well.
>
> not many other options have been seriously considered for solving the
> presumed problem of bad things happening on port 80.  Like moving to
> another port.  100 is still available.
From a network perspective this seems the logical thing to do. A
different port probably means a different URL scheme ("http2://", lovely),
but this then shows ossification at the content provider side (not the
usual middle box claim), so I guess this will not fly.
>
> It is reasonable to want to avoid bad things but there are other ways
> than TLS, but thanks to the push to https everywhere now everyone has
> a MITM that will probably make port 443 just as broken as port 80.  
> Maybe not quite, since I guess ISPs are less likely to do that.  But
> still a lot worse now than 2 years ago.
TLS is not end to end. It is well adjusted to the content provider side,
which can choose where to terminate TLS and can throw in any number of
third parties, which even get the original URL delivered through the
Referer header. Servers can impersonate any number of identities and get
hints on how to cheat (SNI). The user on the other side has no choice; she
doesn't get notified about the third parties and can't deploy any
infrastructure to protect herself.
>
> Not to mention the concerns around moving en masse to TLS and what
> that will do for the security of TLS itself.  I'm not sure it's ready
> for the load.  CA compromises will affect a lot more sites. They do
> happen and will continue to do so, especially as the bounty goes up by
> a few orders of magnitude.  A lot of eggs going into not many (CA)
> baskets.
>
Giving the Internet to a small number of CAs also seems risky. If
they don't like your opinion they can just revoke your certificate.
Currently you can get a different certificate from somebody else,
but with key pinning, for example, this may become more difficult.

Encryption costs energy. I have heard different numbers, and they seem
to go down over the years, but fighting global warming does not seem
to be an IETF goal.

In my opinion the discussion about using http2 and the discussion about
using TLS should be separate, and luckily http2 has both cleartext and TLS.

Regards,
Roland



Re: 2 questions

Martin Thomson-3
In reply to this post by Yoav Nir-3
On 30 March 2015 at 08:03, Yoav Nir <[hidden email]> wrote:
> Not quite. ALPN is carefully engineered to play nice with MitM. The MitM that are installed now (and for the last 8 years) will easily strip the ALPN extension and downgrade client and server to HTTP/1.

I'm sure that this statement makes some people very sad.

That said, I can't see how a box that is able to MitM TLS can be
prevented from doing more than ALPN stripping.  If the client trusts
it, then it's got carte blanche access.


Re: 2 questions

Walter H.
In reply to this post by Amos Jeffries-2
On 30.03.2015 10:45, Amos Jeffries wrote:

> On 30/03/2015 12:34 a.m., Walter H. wrote:
>> On 29.03.2015 03:19, Constantine A. Murenin wrote:
>>> On 2015-03-28 7:43, Glen wrote:
>>>> 1. What were the reasons for HTTP/2 not requiring TLS?
>>>>
>>>> Is there a significant performance consideration, is it related to
>>>> the cost of certificates (which is now fairly low or even free), or
>>>> are there other technical reasons?
>>> This is incorrect.  The cost of certificates for webmasters is not
>>> "fairly low or even free".
>> In fact they are fairly low or even free, because nobody tells you
>> buying at the most expensive dealer ;-)
>>
>> just try e.g. StartCom ;-)
> Tried that. Got as far as where their Terms and Conditions forbid me from
> getting certs on behalf of my clients.
can't your clients do this for themselves?
(didn't you consider that when someone offers something really for free,
it may not be resold ...)
>>> Think of all the consumer electronic devices like the 15 USD 802.11n
>>> wireless routers -- who's going to be paying for their certificates?
>> any cheap routing box, either with WLAN or not does use self-signed
>> certificates; and business environments have different use cases and/or
>> hardware;
>> and there they can have their own CA, too ...
> Go ahead. Try it. The modern browsers will all throw up confusing
> looking popups about security thingys,
I have no problems doing this ...
>   red stop signs, unlocked padlocks, etc in front of their users on each request using self-signed
> certs and
using just a self-signed CA cert in e.g. squid is not enough ...
>   Chrome will not even permit the device control pages to be opened.
that's a default setting; I'm using Chrome because of heavy security bugs
in newer FF releases ...




Re: [Moderator Action] 2 questions

Glen
In reply to this post by Glen
Sending again.

On 2015/03/29 16:50, Glen wrote:

> Thanks for the replies.
>
> 1. As far as I understand it (which is not very far), opportunistic encryption is neither "by default" (since it requires extra server-side configuration) nor secure (no MITM protection, etc.)
>
> I'm okay with HTTP/2 without TLS, however (my opinion):
>
> a) User agents MUST show a security warning before you submit data over HTTP (you could have a "remember this choice" option per-user and per-domain). As far as I know, this is not currently implemented in any browsers (I think if you submit to an HTTP domain from an HTTPS one, you may receive a warning). The main point is, it's more important that users know that they're on an INSECURE domain, than it is that they are on a SECURE one (by then it's too late).
>
> b) All vendors should support it. If I decide that my site does not require encryption (e.g. it's a read-only website or a website that runs within a LAN [like a router page]), then I should not be forced to use it in order to run over HTTP/2. I think that Mozilla and Google probably have good intentions, but I don't think that they have made the right decision at all. We don't want to go back to the stage where every browser was doing its own thing, and causing massive headaches for developers and even end-users. There are ways (see above) to make the web more secure (by default) without forcing anything on anyone. It's kind of like smoking – it's bad for you, and we should warn against it, but at the end of the day every person reserves the right to do as they please (screw up their lungs, or submit their (possibly) private information over an insecure connection).
>
> 2. Not being able to safely compress content seems like a big problem. Are there any (content) compression algorithms that are not susceptible to these vulnerabilities, or has there been any discussion regarding the development of a new algorithm to combat these issues? From what I know, compressing content can have a significant (positive) effect on performance, so it would be really unfortunate if this was no longer possible without exposing your website to various security exploits.
>
> Glen.





Re: 2 questions

Walter H.
In reply to this post by Mike Bishop
On 30.03.2015 02:50, Mike Bishop wrote:
> You're skipping the discussion about why price of the cert is not the cost of running TLS.  There's admin overhead in renewing the cert for each domain, there's network infrastructure overhead in providing each domain a unique IP address (because you can't guarantee every client supports SNI, much as we'd like to), and that additional network infrastructure cost means hosting becomes more expensive.
it is true that a server needs more CPU, memory and other resources when
sending content using TLS compared to just sending it plain;
it is also true that you need someone who renews the certs, and that
you need a unique IP address; but it is not impossible to do so - the
available resources would be enough,
even the IP addresses;
let me explain with a little example at the end why you are right and
wrong at the same time;

> But fundamentally, the argument was that if HTTP/2 needed to cover the same scenarios as HTTP/1.1,
not really; or do you really think there is a need for something new
that is the same as the old?

here the example:

think of someone or a company that uses the Internet for e-commerce;
presenting the products is public for anybody, so this doesn't need to be
presented over TLS,
but when someone enters data to order the products, this must be done
using TLS;
comparable to a bank: the presentation of all the bank's products -
e.g. interest rates, common terms and conditions, ... - can be presented
to the public without the need for TLS, but the electronic banking
service must only be offered over TLS;

now think of the "next step", the website shows advertising for what the
company gets money, that reduces the hosting costs;
this can be done in 2 ways: using a 3rd party, this is less efficient,
compare it to a folder together with a newspaper;
or without, the most efficient way, compare it to a newspaper that has
printed the advertisings anywhere between
the news and other informations;

now think of the people that do not want see the advertisings; with the
newspaper it is easy to bring them showing on the advertisings,
just print them anywhere between the news; an enclosed folder with
advertisings can be thrown away without being really noticed;

a little analogy: a user can easily block 3rd-party advertisements by
blocking just those domains; for this it would not make any difference
whether the content is sent plain or encrypted using TLS,
because these blockings are domain/host specific;
if the advertisements are served without a 3rd party, then a user might
block specific URLs - this and the above steps can be done centrally at a
proxy server;
but when the whole site is only sent encrypted using TLS, each user can
only stop the advertisements from being loaded by himself/herself, without
breaking the
end-to-end encryption; a proxy server doesn't help to prevent this,
unless it does man-in-the-middle;

so now the question for you: do you really think TLS costs you so much
more that it isn't worth doing, even though it protects this way of
reducing the whole hosting costs?

by the way:
can you please read this:
https://datatracker.ietf.org/doc/draft-hoehlhubmer-https-addon/
I want this to be an RFC

Thanks,
Walter


