FW: Feedback on the Strict-Transport-Security specification

Eric Lawrence

Forwarding at the request of the STS-draft authors.

 

From: Eric Lawrence
Sent: Friday, October 09, 2009 11:42 AM
To: 'Steingruebl, Andy'; '[hidden email]'
Cc: Hodges, Jeff; 'Collin Jackson'
Subject: RE: Strict-Transport-Security specification

 

Hey, guys!  You both asked me for feedback on the STS spec a while ago, and I’ve finally managed to dig out enough to provide some.

 

I’m excited to see the progress here, and most of the issues I’ve noted are quite minor. I am a bit concerned that the spec doesn’t mandate behavior for mixed-content; I know such requirements would be controversial and non-trivial, but without the behavior being mandated by the spec, I think we’re likely to see divergent and incompatible behavior on STS sites.

 

Thanks,

Eric
Hopefully this is still the latest draft?  http://lists.w3.org/Archives/Public/www-archive/2009Sep/att-0051/draft-hodges-strict-transport-sec-05.plain.html

 

Editorial & technical issues

[Section: Abstract] Typo: “defines a mechanism to enabling Web sites” should read “for enabling” (or “to enable”).

 

[Section 1: Introduction] I’ve never seen a spec use the word “annunciate” before. Any reason not to prefer “announce” or “display”?

 

[Section 1: Introduction] “or if a server's domain name appears incorrectly”  Isn’t the problem here typically that the domain name does not appear at all?

 

[Section 1: Introduction] “a HTTP request header field is used to convey site policy to the UA.”  This specification proposes an HTTP response header, not a request header.

 

[Section 2.2: Policy Summary]  “terminates, without user recourse, any secure transport connection attempts upon any and all errors.”  I’m not convinced that “any and all” is the right way to go here. Shouldn’t this spec call out each certificate and certificate-chain error?  Otherwise, should I consider a failure at a different protocol level (e.g., a gateway or DNS hiccup) to be a fatal error?

 

[Section 2.4.2: Detailed Core Requirements]: “4. UAs need to re-write all insecure UA "http" URI loads to use the "https" secure scheme for those web sites for which secure policy is enabled.”  This requirement is insufficiently specific: it does not really explain what “rewrite” means.  Does it mean that the HTML parser will detect any insecure-but-should-be-secure URIs and rewrite them within the markup, such that JavaScript could observe the change in the HREF attribute?  Or does it simply mean that upon de-reference the URI is automatically “upgraded” to HTTPS with no notice to the caller?

 

[Section 2.4.2: Detailed Core Requirements]: Requirements #5 and #6 are problematic because browsers (generally speaking) often don’t have rock solid knowledge of where the proper “private domain” / “public suffix” transition occurs.

 

[Section 4: Terminology] The production of the “Effective Request URI” omits the protocol scheme.  I assume this was inadvertent and that the protocol scheme was meant to be included.

 

[Section 5.1: Syntax] The spec should probably specify whether the “delta-seconds” value is expected to be adjusted by the HTTP “Age” response header, if present.
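For concreteness, here is the Age-adjusted interpretation I have in mind, as an illustrative Python sketch (the spec does not currently say any of this, and the function shape is mine):

    import time

    def sts_expiry(max_age, age, received_at):
        """One possible reading: treat the Age header as time the response
        already spent in caches, and shorten the STS lifetime accordingly.
        With no Age header, pass age=0."""
        effective_max_age = max(0, max_age - age)
        return received_at + effective_max_age

    # e.g. max-age=500 with "Age: 120" expires 380 seconds after receipt:
    expires_at = sts_expiry(max_age=500, age=120, received_at=time.time())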

 

[Section 5.1: Syntax] Are the tokens intended to be interpreted case-sensitively? 

 

[Section 5.1: Syntax] What should be done if the server has multiple Strict-Transport-Security response header fields of different values? 

 

[Section 5.1: Syntax] Typo: “Strict-Transport-Sec HTTP Response Header” should read “Strict-Transport-Security HTTP Response Header”.

 

[Section 6.1: HTTP-over-Secure-Transport Request Type] Why must the server include this header on every response?  This seems likely to prove prohibitively difficult across, say, Akamai load balancers for images, etc.  What happens if the server fails to include the response header on a given response?

 

[Section 6.2] I’m not sure why the spec uses the confusing terminology “HTTP-over-Secure-Transport” whilst simultaneously demanding that various URLs be converted specifically to “HTTPS”, which would preclude the flexibility allowed by the more awkward terminology.

 

[Section 6.2] “A STS Server must not include the Strict-Transport-Security HTTP Response Header in HTTP responses conveyed over a non-secure transport.”  Why not?  It seems harmless to include if the UA doesn’t respect it.

 

[Section 7.1] What if the STS response header is present but contains no tokens?  7.1 suggests that the header alone indicates an STS server.

 

[Section 7.1.1; Design Decision #4] I know there are reasons to avoid using secure protocols to IP-literal addressed servers, but in Intranet environments this may be expected and desirable. Why forbid it here?

 

[Section 7.1.2] While I understand the restrictions imposed here, it is something of a shortfall that https://www.example.com cannot enforce STS for requests to http://example.com.  The threat here is obvious: the user typically visits https://www.paypal.com and gets STS applied, but in a coffee shop or on an untrusted network, inadvertently types just “paypal.com” in the address bar.  Because no STS policy is cached for that server, an exploit becomes possible.

 

[Section 7.3] If there are any certificate errors in an HTTPS request, you’d better not have received any HTTP “header fields” back from the server; if you did, you’ve implemented HTTPS incorrectly.

 

[Section 9] “expiry time match that for the web site's domain certificate”  I’m not sure I understand the intent of synchronizing such expiration.  Wouldn’t you explicitly not want to synchronize the expiration of STS and the certificate, so that the certificate is properly no longer useful once it expires?

 

[Section 10: UA Implementation Advice; Section 2.4.3: Ancillary Requirements;] This portion of the spec troubles me the most. I was looking forward to this spec settling things once and for all and requiring mixed content to be treated as a fatal error. However, the spec doesn’t require that, and thus I think it’s missing out on an absolutely critical opportunity.  If UAs differ in behavior (e.g. IEvN silently blocks “without recourse” mixed content but Firefox does not) then it’s likely that users and developers will erroneously conclude that the more secure UA is “broken” or “buggy.”

 

Having noted this, I do need to observe that controlling mixed content is harder for IE than for any other browser, because IE requires add-ons to go directly to the network stack (WinINET/URLMon), while competing browsers typically expect that the add-on will use NPAPI to request that the host browser collect data on its behalf.

 

Other cases of “mixed” content: the WebSocket specification, which supports both secure and insecure modes.  Ditto for FTP/FTPS.

 

[Section 10] I was disappointed not to see any mention of the privacy implications of STS hostname storage, and/or recommendations on how such storage should interact with browser “private modes” and/or cleanup features.

 

[Section 10] I was happy to see the section on vendor-configured/default STS policy.  I think this is a promising mechanism.

 

[Section 11.1] I think the discussion of DoS bears further explanation: it doesn’t describe what a “fake STS header” is or how it can be set. More specifically, it doesn’t mention the aspects of the preceding spec that make this attack difficult to execute.

 

[Section 11.3] The NTP attack is very cool.

 

Other thoughts: Should STS offer a flag such that all cookies received from the STS server would be automatically upgraded to “SECURE” cookies?

 

One threat not mentioned is cross-component interactions.  This spec appears to primarily concern browsers, while the real-world environment is significantly more complex.  For instance, there are a number of file types which will automatically open in applications other than the browser when installed; those other applications may perform network requests to an STS host using a network stack other than that provided by the browser. That network stack may not support STS, or may not have previously cached STS entries for target servers. Thus a threat exists that out-of-browser requests could be induced that circumvent STS.

 

-Eric


Re: FW: Feedback on the Strict-Transport-Security specification

Adam Barth
Thanks for your feedback.  Comments inline.  (I've skipped the
editorial comments.)

On Tue, Oct 27, 2009 at 5:01 PM, Eric Lawrence
<[hidden email]> wrote:
> I am a bit concerned that the spec doesn’t mandate behavior for
> mixed-content; I know such requirements would be controversial and
> non-trivial, but without the behavior being mandated by the spec, I think
> we’re likely to see divergent and incompatible behavior on STS sites.

There's a tension about what to put in STS and what is more
appropriate for a more general policy delivery mechanism, like
Content Security Policy <https://wiki.mozilla.org/Security/CSP>.
The main reason not to include STS in CSP is that the browser needs
to know the STS policy before it receives the CSP header, because the
browser needs to handle errors during the SSL/TLS handshake.

In the case of mixed content, we can wait until we receive an HTTP
header, so we don't need to play tricks with time scoping (i.e.,
Max-Age) or URL scoping (i.e., includeSubDomains).  I'd like to see
browser vendors expose policy levers for controlling mixed content,
but I'm not sure whether STS or CSP is a better home for that
directive.

> Hopefully this is still the latest draft?
> http://lists.w3.org/Archives/Public/www-archive/2009Sep/att-0051/draft-hodges-strict-transport-sec-05.plain.html

I believe it is.

> [Section 2.4.2: Detailed Core Requirements]: “4. UAs need to re-write all
> insecure UA "http" URI loads to use the "https" secure scheme for those web
> sites for which secure policy is enabled.”  This requirement is
> insufficiently specific: it does not really explain what “rewrite” means.
> Does it mean that the HTML parser will detect any insecure-but-should-be-secure
> URIs and rewrite them within the markup, such that JavaScript could observe
> the change in the HREF attribute?

This is how our original prototype worked, but I don't think that's
how the real implementations should work.

> Or does it simply mean that upon
> de-reference the URI is automatically “upgraded” to HTTPS with no notice to
> the caller?

What I'd recommend here is to treat the HTTP-to-HTTPS "rewrite" as a
simulated 307 redirect, like the one the site is supposed to provide
if we actually retrieved the HTTP URL.
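Concretely, something like this illustrative Python sketch (the
sts_cache.knows() lookup is a made-up API, not real browser internals):

    from urllib.parse import urlsplit, urlunsplit

    def maybe_upgrade(url, sts_cache):
        """Return the https:// URL the UA should behave as if it had been
        307-redirected to, or None if no STS policy applies.  (A real
        implementation would also map an explicit :80 to :443.)"""
        parts = urlsplit(url)
        if parts.scheme == "http" and sts_cache.knows(parts.hostname):
            return urlunsplit(("https",) + parts[1:])
        return None

The point of framing it as a 307 is that the method and body are
preserved, and the upgrade is observable in exactly the way a real
redirect would be.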

> [Section 2.4.2: Detailed Core Requirements]: Requirements #5 and #6 are
> problematic because browsers (generally speaking) often don’t have rock
> solid knowledge of where the proper “private domain” / “public suffix”
> transition occurs.

I think there might be some confusion about what "higher-level" means
in this context.  The intent is that:

1) Both example.com and foo.example.com could set policy for
bar.foo.example.com.
2) Neither bar.foo.example.com nor foo.example.com could set policy
for example.com.
3) bar.foo.example.com cannot set policy for foo.example.com.
4) foo.example.com cannot set policy for qux.example.com.

etc.

I don't think we need a notion of a public suffix to enforce these rules.
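In code, the check is just suffix matching on whole labels; an
illustrative Python sketch:

    def can_set_policy_for(setter, target):
        """True if host `setter` may set STS policy for host `target`:
        the setter must be the target itself or a parent domain of it."""
        setter = setter.lower().rstrip(".")
        target = target.lower().rstrip(".")
        return target == setter or target.endswith("." + setter)

    assert can_set_policy_for("example.com", "bar.foo.example.com")          # 1
    assert not can_set_policy_for("foo.example.com", "example.com")          # 2
    assert not can_set_policy_for("bar.foo.example.com", "foo.example.com")  # 3
    assert not can_set_policy_for("foo.example.com", "qux.example.com")      # 4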

> [Section 5.1: Syntax] Are the tokens intended to be interpreted
> case-sensitively?

Yes.  I think this is implied by the grammar style Jeff is using, but it
might be worth noting for us non-ABNF experts.

> [Section 5.1: Syntax] What should be done if the server has multiple
> Strict-Transport-Security response header fields of different values?

My opinion is that we should honor the most recently received header, both
within a single response and across successive responses.
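As a sketch of that rule (Python; headers given as ordered
(name, value) pairs):

    def effective_sts_value(headers):
        """Within one response, the last Strict-Transport-Security field
        wins; across responses, each new winner overwrites the cached
        entry for the host."""
        values = [v for (k, v) in headers
                  if k.lower() == "strict-transport-security"]
        return values[-1] if values else None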

> [Section 6.1: HTTP-over-Secure-Transport Request Type] Why must the server
> include this header on every response?  This seems likely to prove
> prohibitively difficult across, say, Akamai load balancers for images, etc.
> What happens if the server fails to include the response header on a given
> response?

I think that's a server conformance requirement.  The UA conformance
requirements are set up so that this doesn't matter too much.  As long
as you get your entry in the STS cache, you'll be fine.

> [Section 6.2] A STS Server must not include the Strict-Transport-Security
> HTTP Response Header in HTTP responses conveyed over a non-secure
> transport.  Why not?  It seems harmless to include if the UA doesn’t respect
> it.

Again, this is a server conformance requirement that doesn't affect
UAs.  It doesn't make sense to send the header here.  We might as well
prohibit servers from sending it.

> [Section 7.1] What if the STS response header is present but contains no
> tokens?  7.1 suggests that the header alone indicates an STS server.

That sounds like a bug.  An empty header should be a no-op.

> [Section 7.1.1; Design Decision #4] I know there are reasons to avoid using
> secure protocols to IP-literal addressed servers, but in Intranet
> environments this may be expected and desirable. Why forbid it here?

I don't think there's any way to provide security in this case.  My
understanding is that anyone can get these certificates.  Is there
some benefit to supporting these cases?  Maybe CAs might change their
policies in the future?

> [Section 7.1.2] While I understand the restrictions imposed here, it is
> something of a shortfall that https://www.example.com cannot enforce STS for
> requests to http://example.com.  The threat here is obvious: the user
> typically visits https://www.paypal.com and gets STS applied, but in a
> coffee shop or on an untrusted network, inadvertently types just “paypal.com”
> in the address bar.  Because no STS policy is cached for that server, an
> exploit becomes possible.

The thought is that https://www.paypal.com/ can load an image from
https://paypal.com/ to enable STS for the root domain.  Letting
www.paypal.com opt in for paypal.com is going to lead to a bunch of
unhappy people who type "paypal.com" and reach a hard blocking page
if there is a CN mismatch.

> [Section 10: UA Implementation Advice; Section 2.4.3: Ancillary
> Requirements;] This portion of the spec troubles me the most. I was looking
> forward to this spec settling things once and for all and requiring mixed
> content to be treated as a fatal error. However, the spec doesn’t require
> that, and thus I think it’s missing out on an absolutely critical
> opportunity.  If UAs differ in behavior (e.g. IEvN silently blocks “without
> recourse” mixed content but Firefox does not) then it’s likely that users
> and developers will erroneously conclude that the more secure UA is “broken”
> or “buggy.”

I responded to this at the top of the email.  There seems to be some
amount of support for making STS imply blocking mixed content.  If you
think this is what we should do, then we can do it.  One concern I
have here is that browsers' mixed-content detection is notoriously
buggy, but maybe this requirement will motivate us to get it right.

> Having noted this, I do need to observe that controlling mixed content is
> harder for IE than for any other browser, because IE requires add-ons to go
> directly to the network stack (WinINET/URLMon), while competing browsers
> typically expect that the add-on will use NPAPI to request that the host
> browser collect data on its behalf.

This picture is actually even less rosy.  Some popular NPAPI plug-ins
use a mix of browser-provided and OS-provided networking services
because the NPAPI network APIs lack some basic functionality (like
setting headers on GET requests).

> Other cases of “mixed” content: the WebSocket specification, which supports
> both secure and insecure modes.  Ditto for FTP/FTPS.

and CORS.  There's a lot of complexity to mixed content.

> [Section 10] I was disappointed not to see any mention of the privacy
> implications of STS hostname storage, and/or recommendations on how such
> storage should interact with browser “private modes” and/or cleanup
> features.

We should add this discussion.  The implementation in Chrome stores
only hashes of host names and clears the cache when the user resets
browser data.  In "private mode", Chrome creates a fresh STS cache and
stores the directives in memory only (which is relatively useless).
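A sketch of that storage scheme (Python; the exact canonicalization
shown is an assumption on my part, not necessarily what Chrome ships):

    import hashlib

    def sts_cache_key(hostname):
        """Key the STS cache on a digest rather than the hostname itself,
        so the cache doesn't read as a plaintext browsing history.
        (IDNA handling for internationalized names is omitted.)"""
        canonical = hostname.lower().rstrip(".")
        return hashlib.sha256(canonical.encode("ascii")).hexdigest()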

> Other thoughts: Should STS offer a flag such that all cookies received from
> the STS server would be automatically upgraded to “SECURE” cookies?

I think this is a good idea for a new token in a future version.  I'm
not sure whether Jeff has updated the grammar in the spec yet, but the
plan is to use a forward-compatible syntax that lets vendors
experiment with more tokens.
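The parsing side would look something like this sketch (Python,
illustrative only, since the actual grammar is Jeff's call).  Unknown
directives are kept rather than invalidating the header, and an empty
header parses to nothing, which also gives the no-op behavior
mentioned above:

    def parse_sts(value):
        """Parse a value like 'max-age=300; includeSubDomains; newToken=x'.
        Token names stay case-sensitive, per the grammar discussion above."""
        directives = {}
        for part in value.split(";"):
            part = part.strip()
            if not part:
                continue
            name, sep, val = part.partition("=")
            directives[name.strip()] = val.strip() if sep else None
        return directives

    # A UA acts on the directives it knows (max-age, includeSubDomains)
    # and ignores the rest, so vendors can experiment with new tokens.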

> One threat not mentioned is cross-component interactions.  This spec appears
> to primarily concern browsers, while the real-world environment is
> significantly more complex.  For instance, there are a number of file types
> which will automatically open in applications other than the browser when
> installed; those other applications may perform network requests to an STS
> host using a network stack other than that provided by the browser. That
> network stack may not support STS, or may not have previously cached STS
> entries for target servers. Thus a threat exists that out-of-browser
> requests could be induced that circumvent STS.

For Internet Explorer, I would recommend coupling the STS cache with
the WinInet cookie jar.  That way, Secure cookies in Internet Explorer
would be protected by STS even in external applications.

Thanks for your detailed comments.

Adam