IE Team's Proposal for Cross Site Requests


Re: IE Team's Proposal for Cross Site Requests

Maciej Stachowiak


On Mar 14, 2008, at 4:59 PM, Eric Lawrence wrote:

>
> =====
>
> Maciej Stachowiak [[hidden email]] asked:
> <<How does this compare to the Cross-Site Extensions for  
> XMLHttpRequest standard that is being developed by Web API and Web  
> App Formats (and as implemented in Firefox betas)? From Apple's  
> point of view we would like to have a unified standard in this area.>>
>
> We believe that the XDR proposal exposes a smaller surface of attack  
> than the Cross-Site extensions for XHR.  Specifically, it can be  
> demonstrated that the capabilities exposed by XDR are virtually  
> identical to the capabilities exposed by existing HTML tags.  The  
> one exception (obviously) is that the XDR object allows examination  
> of response bodies cross-domain if and only if the server explicitly  
> indicates that such access is permissible via the  
> XDomainRequestAllowed header.

But not exactly identical, since forms can't be used to POST XML  
content with a proper MIME type cross-domain. This is actually more  
restricted in XHR2+AC.
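The text/plain form trick referred to here can be sketched as follows; this is an illustrative reconstruction (the helper name and SOAP payload are hypothetical), not code from either proposal:

```javascript
// Hypothetical sketch: how a cross-domain <form encType="text/plain">
// serializes its fields. Fields are joined as "name=value" lines, so an
// XML payload can be smuggled into the body, but the request still goes
// out labelled text/plain, never application/xml or text/xml.
function encodeTextPlainForm(fields) {
  return Object.keys(fields)
    .map(function (name) { return name + "=" + fields[name]; })
    .join("\r\n");
}

// An attacker puts the XML payload into a field *name* and leaves the
// value empty; the trailing "=" is often tolerated by lax parsers.
var body = encodeTextPlainForm({ "<soap:Envelope>...</soap:Envelope>": "" });
```

The body carries XML-shaped text, but the Content-Type header cannot be made to say so, which is the restriction under discussion.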

> =====
>
> Maciej Stachowiak [[hidden email]] asked, in part:
> <<I am also not sure if a DNS rebound cross-domain XHR with POST or  
> some other method can do anything that you can't do with a cross-
> domain form submission. You can set custom headers, but that seems  
> unlikely to make the difference between safe and unsafe.>>
>
> It's certainly a possibility.  For instance, consider a device which  
> accepts SOAP XML as input.  The designers of the device were wise to  
> note that a cross-domain form submission could be made (encType =  
> text/plain) that contains XML-formatted content, and thus they  
> devised an anti-CSRF mechanism of rejecting requests that do not  
> bear a proper SOAPAction header.  Such restriction properly blocks  
> CSRF via HTML forms, but is put at risk if a cross-domain XHR  
> request is able to send arbitrary headers.

On the other hand, if the anti-CSRF mechanism were checking for a  
proper XML Content-Type instead of looking for a SOAPAction header,  
XDR would be more vulnerable than XHR2+AC. If the server also checks  
the Host header, then XHR2+AC would be completely safe (since no DNS  
rebinding attack is then possible).

In any case, it seems like this could be addressed through a strict  
whitelist of allowed request headers, including such critical headers  
as Accept and Accept-Language but ruling out SOAPAction. Or XHR2+AC  
could even block all custom headers on cross-site requests. Let's take  
that point as negotiable. Allowed methods are also a negotiable point.  
These issues both address what may be customized on the request, but  
the most obvious incompatibilities between XDomainRequest and XHR2+AC  
are the API and protocol.
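The strict header whitelist suggested above might look like the following sketch; the list contents are illustrative only, not taken from any draft:

```javascript
// Hypothetical sketch of a strict request-header whitelist for
// cross-site requests: only headers known to be harmless pass through;
// anything else (e.g. SOAPAction) is silently dropped or rejected.
var CROSS_SITE_HEADER_WHITELIST = [
  "accept", "accept-language", "content-language"
];

function isAllowedCrossSiteHeader(name) {
  return CROSS_SITE_HEADER_WHITELIST.indexOf(name.toLowerCase()) !== -1;
}
```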

What I'd like to understand is whether there are security benefits to  
the API and protocol differences. Or if not, if there is any other  
reason to prefer the Microsoft-proposed API and protocol to the  
current draft standards. Can anyone from Microsoft address that point?

Regards,
Maciej


Re: IE Team's Proposal for Cross Site Requests

Henri Sivonen
In reply to this post by Eric Lawrence-4

On Mar 15, 2008, at 01:59, Eric Lawrence wrote:

> XDR is intended for "public" data.  We explicitly suggest that  
> Intranet servers do not expose private data through this mechanism.  
> In order to ensure that no existing servers/services (in any zone)  
> are put at risk, XDR does not send credentials of any sort, and  
> requires that the server acknowledge the cross-domain nature of the  
> request via the response header.


In practice, though, cross-site requests for user-specific data are so  
interesting that people will make them anyway. The user will have to  
trust the third-party site with credentials or a token, which will be  
encoded in the URI or in the POST payload. The inability to pass  
credentials or a token in the HTTP headers will not stop that data from  
being communicated--it will only be communicated in an inconvenient way.

--
Henri Sivonen
[hidden email]
http://hsivonen.iki.fi/




Re: IE Team's Proposal for Cross Site Requests

Laurens Holst-2
In reply to this post by Eric Lawrence-4
Eric Lawrence schreef:
> Note that XDR supports only the GET and POST methods, DELETE and other methods are not supported.
>  

I don’t really see how POST is less harmful than DELETE. POST (if used
in a REST-y way) can be used to wreak serious havoc: e.g. posting spam
messages, overloading server storage, uploading viruses, adding new
superuser accounts for an attacker, changing settings such as passwords,
or influencing poll results.

Additionally, there are a great number of sites that are using the HTTP
POST method for operations that would be more suitable for PUT and
DELETE. The reason that this happens is probably HTML’s fault, because
it only supports GET and POST, crippling the functionality that HTTP
provides. Non-REST webservices protocols such as XML-RPC and SOAP also
exclusively use POST.

If XDR only supports GET and POST, it encourages sites to use POST to
implement delete functionality and abuse the HTTP protocol because that
is the only way they can get the functionality they desire to work.
Basically, you’re boycotting REST in favour of SOAP.

So, I do not see much benefit in the decision to disallow DELETE while
allowing POST.

> XDR is intended for "public" data.  We explicitly suggest that Intranet servers do not expose private data through this mechanism.  In order to ensure that no existing servers/services (in any zone) are put at risk, XDR does not send credentials of any sort, and requires that the server acknowledge the cross-domain nature of the request via the response header.
>  

I don’t think you’re really keeping users very safe that way. To give
an example, PHP is by default configured to fall back to passing session
credentials entirely through session-ID link parameters when cookies are
not available, so such sites need neither cookies nor authentication
headers. And on many sites (e.g. phpBB-based forums), communication with
the server (including deletion) happens exclusively through GET and POST
requests. XDR’s restrictions on methods and credentials will not do
these sites any good. Rather, they encourage even more sites to work
around the restrictions.

Additionally, if you really want to keep sites safe in this manner, you
should disallow cross-site POST requests for both XDR and HTML forms.
Otherwise there is already a breach in the safety; POST is just as
suitable for ‘public’ data as DELETE and PUT are. You should allow those
methods, so that developers can at least provide a proper REST API and
are not forced to overload POST like XML-RPC, SOAP and friends do.

> Laurens Holst [[hidden email]] asked:
> <<So, if I cannot set HTTP headers, how am I supposed to set an Accept
> header to indicate that I e.g. want to receive application/xhtml+xml,
> application/atom+xml, application/*rss*+xml, application/rdf+xml, etc.?
> Your proposal is completely unfriendly to content negotiation. Also,
> there are valid use cases for setting other headers.
>
> I sincerely hope you will fix this issue by creating a blacklist of
> headers instead of disallowing them entirely.>>
>
> We absolutely agree that it is possible to define use cases that XDR does not accommodate.  We believe that XDR enables the most common cross-domain scenarios with negligible impact to the attack surface of existing servers and the browser.
>
> Creating a "blocklist" of headers is problematic as there is no existing mechanism to determine whether a target server will interpret a given header in a particular way.
>
> By way of example, we are aware of servers which utilize custom HTTP request headers as an anti-CRSF mechanism.  Such servers assume that, because the only mechanism currently available in the browser to send custom headers is via XMLHTTPRequest, if such custom headers are present, then the request "must" be from a same-origin XHR object.  Hence, permitting use of custom headers in XDR would expose such servers to attack.  It's absolutely reasonable to argue that such servers never should have made such assumptions, however we do not feel it is appropriate to put servers at risk.
>  
Then create a whitelist, with at least the Accept-* headers on it. They
are clearly defined, and it is doubtful that they are used in a
different manner than described. The use cases are clear and plenty.

> While HTTP-based content-negotiation is certainly well-defined by the HTTP specifications, for operational reasons it is relatively uncommon in the wild.  Different content types are usually served from different URLs; for instance, you can see that Yahoo!-based services use a URL parameter to pivot between JSON and XML format (http://developer.yahoo.com/common/json.html).
>  

I don’t think it is up to Microsoft to decide that content negotiation
is useless and that this part of HTTP should not be supported even
though it poses no security risk. Part of the reason that content type
negotiation is reasonably uncommon in the wild is mediocre support by
user agents. Content language and content encoding negotiation, by the
way, are actually pretty common.

Content type negotiation is very useful and I think a very nice feature
of HTTP, offering different ‘views’ (e.g. RSS or RDF, SVG or PNG) of the
same content. With XHR it was finally possible to use content type
negotiation in a good way in browsers, please don’t break it again in XDR.

I also think your argument here is inconsistent with the functionality
XDR provides, since you DO support specifying the content type for POST
entity bodies (contentType property).

If you decide to not allow manipulation of the Accept header anyway, you
should make sure to NOT send an Accept header in the request at all,
because it would not be able to reflect the content that the requestee
intended to receive.
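On the client, such negotiation is just a setRequestHeader("Accept", ...) call; the server's half can be sketched roughly as follows (a simplification: real negotiation also handles q-values and wildcards):

```javascript
// Simplified content-negotiation sketch: pick the first media type in
// the Accept header that the server can produce. Real negotiation also
// weighs q-values and wildcards; this only does exact matches.
function negotiate(acceptHeader, available) {
  var wanted = acceptHeader.split(",").map(function (t) {
    return t.split(";")[0].trim();  // drop any ";q=..." parameters
  });
  for (var i = 0; i < wanted.length; i++) {
    if (available.indexOf(wanted[i]) !== -1) return wanted[i];
  }
  return null;  // nothing acceptable: the server would respond 406
}
```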

> It's certainly a possibility.  For instance, consider a device which accepts SOAP XML as input.  The designers of the device were wise to note that a cross-domain form submission could be made (encType = text/plain) that contains XML-formatted content, and thus they devised an anti-CSRF mechanism of rejecting requests that do not bear a proper SOAPAction header.  Such restriction properly blocks CSRF via HTML forms, but is put at risk if a cross-domain XHR request is able to send arbitrary headers.
>  

This is also the case for XDR, no? The user can specify an arbitrary
POST body, with arbitrary content type.

I think that is all I have to say :).


~Grauw

--
Ushiko-san! Kimi wa doushite, Ushiko-san nan da!!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Laurens Holst, student, university of Utrecht, the Netherlands.
Website: www.grauw.nl. Backbase employee; www.backbase.com.



Re: IE Team's Proposal for Cross Site Requests

Laurens Holst-2
In reply to this post by Sunava Dutta
Sunava Dutta schreef:
> *Properties*
>
> ·         *responseText - *After the server responds, you can retrieve
> the data string through the read-only /responseText /property.
>

When no character encoding is specified in the HTTP response headers,
how can the user parse XML in the proper encoding (as specified in the
<?xml encoding="…"?> processing instruction in the XML)? This is
currently also an issue with responseText in XHR, and an important
reason to use responseXML to retrieve XML content, since that takes the
encoding specified in the document into account.

It seems that as you define it right now, XML can only be read if it is
UTF-8. Although I would personally discourage using any other encoding
than UTF-8 :), there are many sites that do and these should be
supported through the mechanism that XML provides.

Additionally, when no character encoding is specified in the HTTP
response headers, in what encoding will the content be parsed? I believe
standards dictate that application/xml, text/xml and application/*+xml
should be processed as UTF-8 by default, and text/html as ISO 8859-1? I
hope IE’s implementation will deal with this properly, and that we won’t
end up with XML data parsed as ISO 8859-1 when reading it through
responseText.
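For reference, the encoding declaration that responseXML honours can be read with a rough sketch like this (a real parser must also handle BOMs and UTF-16 autodetection; per the XML spec the default is UTF-8 when no declaration is present):

```javascript
// Rough sketch: extract the encoding name from an XML declaration,
// e.g. <?xml version="1.0" encoding="ISO-8859-1"?>. Falls back to the
// XML default of UTF-8 when no declaration is found.
function xmlDeclaredEncoding(text) {
  var m = /^<\?xml[^>]*encoding=["']([A-Za-z0-9._-]+)["']/.exec(text);
  return m ? m[1] : "UTF-8";
}
```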


~Grauw

--
Ushiko-san! Kimi wa doushite, Ushiko-san nan da!!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Laurens Holst, student, university of Utrecht, the Netherlands.
Website: www.grauw.nl. Backbase employee; www.backbase.com.



Re: IE Team's Proposal for Cross Site Requests

Kris Zyp-4
In reply to this post by Laurens Holst-2

> If XDR only supports GET and POST, it encourages sites to use POST to
> implement delete functionality and abuse the HTTP protocol because that
> is the only way they can get the functionality they desire to work.
> Basically, you’re boycotting REST in favour of SOAP.
I completely agree; it shouldn't be the place of the browser to dictate
that developers must use SOAP style instead of REST, and to cripple the
features of HTTP. The full features of REST (PUT and DELETE) weren't
available in traditional web apps, but with Ajax, I believe REST
techniques are rapidly growing in popularity, and they will be
particularly useful in cross-site scenarios because authors can leverage
HTTP semantics to indicate meaning with different consumers/providers. I
believe the same potential exists for Accept headers, especially in the
cross-domain world. Accept headers are not heavily used right now
because, in the dominantly same-origin request world, developers usually
know a priori what content type is needed. On the other hand, with
cross-site web services, different consumers may desire different
formats, and the Accept parameters could have awesome potential for
allowing developers to request data in a desired format in a
well-understood manner. How foolish it would be to disable these
features in the arena where they would have the most opportunity for
benefit.
I believe the full potential of HTTP has a great opportunity to be fully
realized in the cross-site world, where leveraging well-defined, broadly
understood HTTP features like REST semantics, content negotiation,
partial content transfer, etc. could vastly improve interoperability by
allowing consumers and providers to communicate with a rich set of
well-defined capabilities. Crippling HTTP will force developers to use
work-arounds, which will only decrease interoperability and will
ultimately cause increased use of hacks that will inevitably subvert
security rather than benefit it. Trying to create a simpler, more
restrictive form of security does not mean things will be more secure.
Developers' use of cross-site scripts with callbacks to get cross-site
data is a great example of how an excessively simple security model has
resulted in developers being forced to use dangerous methods to
accomplish their goals.
Thanks,
Kris



Re: IE Team's Proposal for Cross Site Requests

Sean Hogan

Slap me down if I'm missing something obvious, but doesn't the upcoming
cross-browser postMessage functionality already give us cross-site XHR?

A site that wants to facilitate CSXHR just hosts an HTML page that will
do normal XHR.
This page can be loaded in an iframe and communicates with the parent
window via postMessage to provide CSXHR.

This means that overly restrictive XSR will just force sites to fall
back to this scheme.
Unless I'm missing something obvious.

cheers,
Sean


Naive and untested implementation on the site that wants to facilitate
CSXHR:

http://example.org/xhr.html - loaded into an iframe

<html>
<head>
<script>
// message events are dispatched on window, not document
window.addEventListener("message", doXHR, false);
function doXHR(event) {
    var rq = unmunge(event.data);
    var xhr = new XMLHttpRequest();
    xhr.open(rq.method, rq.href, true);
    xhr.onreadystatechange = function() {
        if (xhr.readyState < 4) return;
        if (xhr.status != 200) return;
        event.source.postMessage(munge(rq.method, rq.href,
            serializeHeaders(rq), xhr.responseText));
    };
    xhr.send(rq.data);
}
</script>
</head>
</html>


http://example.org/xhr.js - provides ExampleHttpRequest API

var ExampleHttpRequest = (function() {

var iframe = document.createElement("iframe");
iframe.src = "http://example.org/xhr.html";
iframe.style.height = "0px";
document.body.appendChild(iframe);

var ExampleHttpRequest = function() {};

ExampleHttpRequest.prototype.open = function(method, href) {
    this.method = method;
    this.href = href;
    this.headers = {};
}

ExampleHttpRequest.prototype.send = function(data) {
    window.addEventListener("message", callbackXHR, false);
    var messageData = munge(this.method, this.href,
        serializeHeaders(this), data);
    // the message goes to the iframe's content window
    iframe.contentWindow.postMessage(messageData);
}

function callbackXHR(event) {
    var result = unmunge(event.data);
    // too drunk to fill in the rest
}

return ExampleHttpRequest;

})();







Re: IE Team's Proposal for Cross Site Requests

Julian Reschke
In reply to this post by Laurens Holst-2

Laurens Holst wrote:

> I don’t really see how POST is less harmful than DELETE. POST (if used
> in a REST-y way) can be used to wreak serious havoc (e.g. spam messages,
> overload server data capacity, post viruses, adding new super user
> accounts for the hacker, change settings such as passwords, influence
> poll results).
>
> Additionally, there are a great number of sites that are using the HTTP
> POST method for operations that would be more suitable for PUT and
> DELETE. The reason that this happens is probably HTML’s fault, because
> it only supports GET and POST, crippling the functionality that HTTP
> provides. Non-REST webservices protocols such as XML-RPC and SOAP also
> exclusively use POST.
>
> If XDR only supports GET and POST, it encourages sites to use POST to
> implement delete functionality and abuse the HTTP protocol because that
> is the only way they can get the functionality they desire to work.
> Basically, you’re boycotting REST in favour of SOAP.
>
> So, I do not see much benefit in the decision to disallow DELETE but
> allowing POST.

As a matter of fact, it would be harmful.

> I don’t think you’re really keeping users very safe that way. To quote
> an example, PHP is by default configured to send session credentials
> entirely through sessionID link parameters when cookies are not
> available, and thus don’t need cookies or authentication headers. And on
> many sites (e.g. phpBB-based forums), communication with the server
> (including deletion) happens exclusively through GET and POST requests.
> XDR’s restrictions on methods and credentials will not do these sites
> any good. Rather, they encourage even more sites to work around the
> restrictions.
>
> Additionally, if you really want to keep sites safe in this manner, you
> should disallow cross-site POST requests for both XDR and HTML forms.
> Otherwise, there is already a breach in the safety, POST is equally
> suitable for ‘public’ data as DELETE and PUT are. You should allow those
> methods, so that developers can at least provide a proper REST API and
> are not forced to overload POST like XML-RPC and SOAP and friends do.

Absolutely.

> Then create a whitelist, with at least the Accept-* headers on it. They
> are clearly defined, and it is doubtful that they are used in a
> different manner than described. The use cases are clear and plenty.

Correct.

> I don’t think it is up to Microsoft to decide that content negotiation
> is useless and that part of HTTP should not be supported even though it
> poses no security risk. Part of the reason that content type negotiation
> is reasonably uncommon in the wild is partially because of mediocre
> support by user agents. By the way, content language and content
> encoding negotiation is pretty common, actually.
>
> Content type negotiation is very useful and I think a very nice feature
> of HTTP, offering different ‘views’ (e.g. RSS or RDF, SVG or PNG) of the
> same content. With XHR it was finally possible to use content type
> negotiation in a good way in browsers, please don’t break it again in XDR.

Yes.

Not that it really matters -- even if Conneg wasn't useful, it's always
bad if specs start profiling the base protocol without very good reasons.

> ...

BR, Julian


Re: IE Team's Proposal for Cross Site Requests

Collin Jackson-2
In reply to this post by Eric Lawrence-4

On Fri, Mar 14, 2008 at 4:59 PM, Eric Lawrence
<[hidden email]> wrote:

>  Maciej Stachowiak [[hidden email]] asked, in part:
> > I am also not sure if a DNS rebound cross-domain XHR with
> > POST or some other method can do anything that you can't
> > do with a cross-domain form submission. You can set custom
> > headers, but that seems unlikely to make the difference between
> > safe and unsafe.
>
> It's certainly a possibility.  For instance, consider a device which
> accepts SOAP XML as input.  The designers of the device were wise
> to note that a cross-domain form submission could be made
> (encType = text/plain) that contains XML-formatted content, and thus
> they devised an anti-CSRF mechanism of rejecting requests that do
> not bear a proper SOAPAction header.  Such restriction properly blocks
> CSRF via HTML forms, but is put at risk if a cross-domain XHR
> request is able to send arbitrary headers.

The only servers that need worry about DNS rebinding attacks are those
behind firewalls and those that care about the IP address of the
client. These servers already need to defend themselves against DNS
rebinding attacks using the basic same-site XMLHttpRequest
functionality, by checking the Host header or using a DNS firewall
such as dnswall. The addition of cross-site XMLHttpRequest does not
increase the attack surface for the DNS rebinding attacker beyond that
of same-site XMLHttpRequests because same-site XMLHttpRequests can set
headers, including SOAPAction.

The access control specification
<http://dev.w3.org/2006/waf/access-control/> recommends the Host
header checking technique. This technique works because the known
socket-level DNS rebinding vulnerabilities in browsers have been
patched by Adobe and Sun.
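The Host-header check described above amounts to something like this sketch (the hostnames are illustrative):

```javascript
// Sketch of the anti-DNS-rebinding check: a server only honours
// requests whose Host header names one of its own hostnames. A rebound
// request arrives with the attacker's hostname and is rejected.
var MY_HOSTNAMES = ["intranet.example.com", "intranet.example.com:8080"];

function hostHeaderIsMine(hostHeader) {
  return MY_HOSTNAMES.indexOf(hostHeader.toLowerCase()) !== -1;
}
```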

Collin Jackson


RE: IE Team's Proposal for Cross Site Requests

Sunava Dutta
In reply to this post by Maciej Stachowiak

Maciej Stachowiak [[hidden email]] said:
<<But not exactly identical, since forms can't be used to POST XML content with a proper MIME type cross-domain.>>

You're right-- setting an arbitrary request content-type is a capability not present in HTML forms today.  While we believe that this is a minimal increase in attack surface, we agree that it's worth considering whether or not such capability should be removed.

If removed, all XDR POST requests could be sent with:

                Content-Type: text/plain; charset=UTF-8

Servers would then be flexible in interpreting the data in the higher-level format they expect (JSON, XML, etc).

Maciej Stachowiak [[hidden email]] asked:
<<What I'd like to understand is whether there are security benefits to the API and protocol differences.>>

We believe that the XDR proposal represents a simpler mechanism for enabling the most commonly requested types of cross-domain access.  We believe that such simplicity will lead to improved security in practical implementations by browsers.

There are many threats against a cross-domain communication mechanism, so we believe the simplicity of XDR makes it more suitable than attempting to plumb cross-domain capabilities into the existing XHR object.  In particular, we are concerned that attempting to introduce new restrictions/added complexity on an XHR object when it is used in a cross-domain manner will result in a confusing programming model for the web developer.




Re: IE Team's Proposal for Cross Site Requests

Thomas Roessler

On 2008-03-17 14:29:54 -0700, Sunava Dutta wrote:

> If removed, all XDR POST requests could be sent with:
>
>                 Content-Type: text/plain; charset=UTF-8

> Servers would then be flexible in interpreting the data in the
> higher-level format they expect (JSON, XML, etc).

Why text/plain, as opposed to, say,
application/x-www-form-urlencoded?

Or even some other content type?  I'm worried that you're suggesting
some pretty intrusive profiling of HTTP here, effectively
*requiring* content sniffing to deal with any kind of form content.

That creates its own bit of complexity and possibilities for
insecurities down the road.

I'd rather we deal with the added attack surface due to being able
to POST properly labelled XML content than introducing another
divergence into how HTTP headers are interpreted by Web
applications.
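The kind of content sniffing such a scheme would force on servers can be sketched as follows (a deliberately crude illustration; real-world sniffing is considerably messier, and that messiness is exactly where the insecurities creep in):

```javascript
// What "flexible interpretation" of a uniform text/plain body means in
// practice: the server must guess the real format from the payload.
function sniffFormat(body) {
  var s = body.replace(/^\s+/, "");        // skip leading whitespace
  if (s.charAt(0) === "<") return "xml";
  if (s.charAt(0) === "{" || s.charAt(0) === "[") return "json";
  return "unknown";
}
```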

--
Thomas Roessler, W3C  <[hidden email]>


Re: IE Team's Proposal for Cross Site Requests

Anne van Kesteren-2
In reply to this post by Sunava Dutta

On Mon, 17 Mar 2008 22:29:54 +0100, Sunava Dutta  
<[hidden email]> wrote:
> There are many threats against a cross-domain communication mechanism,  
> so we believe the simplicity of XDR makes it more suitable than  
> attempting to plumb cross-domain capabilities into the existing XHR  
> object.  In particular, we are concerned that attempting to introduce  
> new restrictions/added complexity on an XHR object when it is used in a  
> cross-domain manner will result in a confusing programming model for the  
> web developer.

Could you elaborate on why you consider the proposed model to be confusing  
for Web developers? It's in fact as simple as:

   var client = new XMLHttpRequest()
   client.onreadystatechange = function() { ...}
   client.open("GET", "http://cross-site.example.org/resource")
   client.send()

Indeed, as complex as normal usage of XMLHttpRequest. The model proposed  
doesn't just solve it for XMLHttpRequest, it can also be used for  
cross-site XSLT:

   <?xml-stylesheet
     href="http://cross-site.example.org/transform"
     type="application/xslt+xml"?>

Again, no changes required in the way you initiate the request. The  
server-side is not much more complex than what has been proposed by  
Microsoft although a preflight request has to be handled by the server to  
ensure that the server is ok with custom methods, a request entity body,  
etc.
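The preflight decision described above can be sketched as a small function. This is only an illustration of the model, under the assumption that GET alone skips the preflight; the exact set of "simple" methods and headers is defined by the draft, not by this sketch.

```javascript
// Sketch: when does a cross-site request require a preflight OPTIONS
// exchange? The set of "simple" methods here is an assumption for
// illustration, not normative spec text.
const SIMPLE_METHODS = new Set(["GET"]);

function needsPreflight(method, customHeaderNames, hasEntityBody) {
  // Custom methods, author-set headers, and request entity bodies all
  // require the server to approve the request up front.
  if (!SIMPLE_METHODS.has(method.toUpperCase())) return true;
  if (customHeaderNames.length > 0) return true;
  return hasEntityBody;
}
```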


--
Anne van Kesteren
<http://annevankesteren.nl/>
<http://www.opera.com/>


Re: IE Team's Proposal for Cross Site Requests

Julian Reschke
In reply to this post by Thomas Roessler

Thomas Roessler wrote:

> On 2008-03-17 14:29:54 -0700, Sunava Dutta wrote:
>
>> If removed, all XDR POST requests could be sent with:
>>
>>                 Content-Type: text/plain; charset=UTF-8
>
>> Servers would then be flexible in interpreting the data in the
>> higher-level format they expect (JSON, XML, etc).
>
> Why text/plain, as opposed to, say,
> application/x-www-form-urlencoded?
>
> Or even some other content type?  I'm worried that you're suggesting
> some pretty intrusive profiling of HTTP here, effectively
> *requiring* content sniffing to deal with any kind of form content.
>
> That creates its own bit of complexity and possibilities for
> insecurities down the road.
>
> I'd rather we deal with the added attack surface due to being able
> to POST properly labelled XML content than introducing another
> divergence into how HTTP headers are interpreted by Web
> applications.

+1.

Removing the ability to properly specify the content type is a bug, not
a feature.

(BTW: the same applies to other kinds of profiling, such as by HTTP
method name)

BR, Julian



Re: IE Team's Proposal for Cross Site Requests

Maciej Stachowiak
In reply to this post by Sunava Dutta


On Mar 17, 2008, at 2:29 PM, Sunava Dutta wrote:

> Maciej Stachowiak [[hidden email]] said:
> <<But not exactly identical, since forms can't be used to POST XML  
> content with a proper MIME type cross-domain.>>
>
> You're right-- setting an arbitrary request content-type is a  
> capability not present in HTML forms today.  While we believe that  
> this is a minimal increase in attack surface, we agree that it's  
> worth considering whether or not such capability should be removed.
>
> If removed, all XDR POST requests could be sent with:
>
>                Content-Type: text/plain; charset=UTF-8
>
> Servers would then be flexible in interpreting the data in the  
> higher-level format they expect (JSON, XML, etc).

I think encouraging more content sniffing of text/plain on the server  
side is likely to increase, not reduce attack surface.

> Maciej Stachowiak [[hidden email]] asked:
> <<What I'd like to understand is whether there are security benefits  
> to the API and protocol differences.>>
>
> We believe that the XDR proposal represents a simpler mechanism for  
> enabling the most commonly requested types of cross-domain access.  
> We believe that such simplicity will lead to improved security in  
> practical implementations by browsers.
>
> There are many threats against a cross-domain communication  
> mechanism, so we believe the simplicity of XDR makes it more  
> suitable than attempting to plumb cross-domain capabilities into the  
> existing XHR object.  In particular, we are concerned that  
> attempting to introduce new restrictions/added complexity on an XHR  
> object when it is used in a cross-domain manner will result in a  
> confusing programming model for the web developer.

So far I have not heard any *specific* security risks of the Access-
Control model as compared to XDR, at least none that have held up to  
closer scrutiny. Is Microsoft aware of any specific such risks, as  
opposed to general concerns?

Certainly simplicity of client-side authoring, server-side authoring  
and implementation are worth discussing as well, but I think the  
approaches are similar enough that simplicity in itself is not a major  
security issue.

Regards,
Maciej



RE: IE Team's Proposal for Cross Site Requests

Sunava Dutta
In reply to this post by Sunava Dutta

In response to my comments:

 

> In particular, we are concerned that attempting to introduce new

> restrictions/added complexity on an XHR object when it is used in a

> cross-domain manner will result in a confusing programming model for

> the web developer.

 

... Anne van Kesteren [hidden email] observed:

 

<<Could you elaborate on why you consider the proposed model to be confusing for Web developers? It's in fact as simple as:

 

   var client = new XMLHttpRequest()

   client.onreadystatechange = function() { ...}

   client.open("GET", "http://cross-site.example.org/resource")

   client.send()

>> 

 

The potential for confusion stems from the limitations which must be introduced for security purposes. 

 

See http://www.w3.org/TR/access-control/#security for the list of limitations currently identified.  The specification specifically notes that:

 

      Hosting specifications should limit the request headers an author

      can set and get, and forbidding setting user credentials through

      any API defined in the hosting specification.

 

Thus, the Access Control specification itself suggests that an XHR object should have different behavior when used in "cross domain mode." 

 

Specifically, this section modifies the expected behavior of the AddHeader() method, and the availability of the "user" and "password" parameters on the Open() method. In the current form, I do not see a definition for the expected behavior of the user agent if the script attempts to call AddHeader() with a forbidden value or Open() with forbidden parameters.  If such use immediately throws an exception, this would be simpler, but if not, then user-agents must take particular care to strip such data upon redirection.

 

Continuing on in that section, the spec requires that user-agents:

 

      Not reveal whether the requested resource exists, until access

      has been granted.

 

This requires that the HEADERS_RECEIVED state must either never be reached for a cross-origin request, or it must be delayed until any access control list in the entity is evaluated.  Hence, eventing behaves differently when a request is cross-origin.

 

Further, the spec requires that user-agents:

 

      Not inappropriately expose any trusted data of the response, such as

      cookies and HTTP header data.

 

This requires that getAllResponseHeaders() and getResponseHeader() should behave differently.

 

Therefore, the Access-Control spec itself demands a different programming model than that provided by the existing XHR implementation.  This seems likely to be a source of confusion for web developers.  The complexity of introducing a "cross-domain mode" into an object which was not designed with this in mind seems likely to lead to implementation flaws.

 

The Access-Control spec notes that:

 

      Authors are to ensure that GET requests on their applications have

      no side effects. If by some means an attacker finds out what applications

      a user is associated with, it might "attack" these applications with GET

      requests that can effect [sic] the user's data (if the user is already

      authenticated with any of these applications by means of cookies or HTTP authentication).

 

I'm concerned that this note suggests that the spec fails to meet its own requirement #2:

 

      Must not require content authors or site maintainers to implement new or

      additional security protections to preserve their existing level of security

      protection.

 

...As cookies and HTTP authentication are commonly used security protections, yet they are sent with cross-origin requests.  CSRF is already a growing problem in the wild, and the Access-Control mechanism requires that web developers understand extremely subtle aspects of the security model to keep their sites secure.

 

Considering the potential complexity of the ALLOW and DENY rules, I'm concerned that the spec also fails requirement #13:

 

      Should reduce the risk of inadvertently allowing access when it is not intended.

      That is, it should be clear to the content provider when access is granted and

      when it is not.

 

We've seen this happen before: access control rules are difficult for web developers and service providers to understand and maintain, especially as the number of rules grows. Such rules usually end up poorly maintained and exploited (e.g. via wildcarding). Furthermore, it may not be desirable to expose the access control rules of a server, say a bank's, to the public. This is information disclosure and is currently a problem with the ACL model.

 

Beyond simple mistakes on the part of the developer of the ALLOW / DENY list, prior implementation experience on the part of other vendors suggests that proper parsing of policy directives is non-trivial and a source of bugs.

 

-----

 

Maciej Stachowiak [[hidden email]] noted:

<<I think encouraging more content sniffing of text/plain on the server side is likely to increase, not reduce attack surface.>>

 

If a service is defined as accepting one format, it need only accept that format, and can reject anything else.  Sniffing is not recommended or desirable.

 

Remember, even if you allow the Content-Type to be specified by the caller, the server has NO guarantee that the Content-Type specified is an accurate description of the POST body content.  To remain secure, servers MUST be robust in the face of malformed input. 
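That robustness requirement can be illustrated with a hypothetical server-side guard that validates the body against the one format the service accepts, ignoring the declared Content-Type entirely. The function name and result shape are illustrative assumptions.

```javascript
// Hypothetical guard: accept only well-formed JSON bodies, no matter
// what Content-Type the caller claimed. Malformed input is rejected
// outright; nothing is sniffed or guessed.
function parseJsonBodyOrReject(body) {
  try {
    return { ok: true, data: JSON.parse(body) };
  } catch (e) {
    return { ok: false, error: "request body is not valid JSON" };
  }
}
```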

 

-----

 

Maciej Stachowiak [[hidden email]] noted:

<<So far I have not heard any *specific* security risks of the Access- Control model as compared to XDR, at least none that have held up to closer scrutiny. Is Microsoft aware of any specific such risks, as opposed to general concerns?>>

 

The Security Worries section here: http://wiki.mozilla.org/Cross_Site_XMLHttpRequest and the Security section here: http://www.w3.org/TR/access-control/#security  describe some of the concerns related to the Access-Control model.  We believe that the XDR model effectively mitigates the concerns described. 

 

-----

 

Maciej Stachowiak [[hidden email]] noted:

<<Certainly simplicity of client-side authoring, server-side authoring and implementation are worth discussing as well, but I think the approaches are similar enough that simplicity in itself is not a major security issue.>>

 

While simplicity alone obviously is no guarantee of security, design complexity almost always leads to implementation bugs.  Implementation bugs in access control mechanisms lead to security bugs.

 

-----

 

 

 

From: Sunava Dutta
Sent: Thursday, March 13, 2008 9:07 PM
To: Sunava Dutta; Web API WG (public); [hidden email]
Cc: Eric Lawrence; Chris Wilson; Zhenbin Xu; Gideon Cohn; Sharath Udupa; Doug Stamper; Marc Silbey
Subject: RE: IE Team's Proposal for Cross Site Requests

 

Adding the WAF group since they have also been working on a similar technology.

 

From: Sunava Dutta
Sent: Thursday, March 13, 2008 8:47 PM
To: Sunava Dutta; Web API WG (public)
Cc: Eric Lawrence; Chris Wilson; Zhenbin Xu; Gideon Cohn; Sharath Udupa; Doug Stamper; Marc Silbey
Subject: IE Team's Proposal for Cross Site Requests

 

Purpose

XDR helps web developers create secure mashups, replacing less secure or poorly performing approaches, including SCRIPT SRC’ing content or IFRAME injection.

 

Microsoft would like to submit XDR to the W3C for standardization so that other browsers can benefit from this technology.

 

 

XDomainRequest (XDR)

Table of Contents

1.0   Summary

2.0   Background: Overview of how XDR allows cross site requests

3.0   API Documentation: Lists the programming interface/methods/properties

4.0   Security Model Flowchart: Highlights the security checks that IE8 makes for an XDR Request.

5.0   Sample Site and Script: For developers wishing to create an XDR page.

6.0   Developer Benefits of using XDR: Covers XDR’s strengths by demonstrating XDR’s goals of security and simplicity.

7.0   Developer Release Notes: A short bulleted list of issues developers should be aware of when using the object and a summary of what XDR cannot do.

1.0 Summary

With Cross Domain Request (XDR), developers can create cross-site data aggregation scenarios. Similar to the XMLHttpRequest object but with a simpler programming model, this request, called XDomainRequest, is an easy way to make anonymous requests to third-party sites that support XDR and opt in to making their data available across domains. Three lines of code will have you making basic cross-site requests. This ensures that data aggregation for public sites such as blogs will be simple, secure, and fast. XDR is an approach designed from the ground up with a focus on security. We understand the current cross-domain XMLHttpRequest proposal and recognize its ability to provide a broader set of services, particularly around declarative auditing for access-control-based scenarios and authenticated connections. It does, however, come at the cost of more complexity and a larger attack surface. While these are certainly compelling scenarios, we realize that existing implementations have bugs (linked 1, 2), some of which, like TOCTOU (time-of-check-to-time-of-use), have since been resolved, while others, like DNS rebinding, remain mostly unaddressed. In addition, maintaining configuration post-deployment is challenging, as Flash has encountered with wildcarding in the past. The IE team is not comfortable implementing a feature with a large attack surface and open/incoming security issues, and proposes XDR as a safer alternative.

 

2.0 Background

 

 

Browsers enforce the same site origin policy, which blocks web pages from accessing data from another domain. Websites often work around this policy by having their server request content from another site’s server in the backend, thus circumventing the check within the browser.

 

Text Box: Figure 1 – IE7 and below need to make a request to the mashup server which then needs to be proxied to the web server.

 

In IE8, web pages can simply make a cross domain data request within the browser using the new XDomainRequest object instead of server-to-server requests.

Cross domain requests require mutual consent between the webpage and the server. You can initiate a cross domain request in your webpage by creating an XDomainRequest object off the window object and opening a connection to a particular domain. The browser will request data from the domain’s server by sending an XDomainRequest: 1 header. It will only complete the connection if the server responds with an XDomainRequestAllowed header with the value “1” for true.

 

For example, a server’s ASP page includes the following response header:

Response.AppendHeader("XDomainRequestAllowed","1");

 

 

 

Security note: Cross domain requests are anonymous to protect user data, which means that servers cannot easily find out who is requesting data. As a result, you only want to request and respond with cross domain data that is not sensitive or personally identifiable.

 


3.0 API Documentation

 

 

Methods

Once you create an XDomainRequest object, you can use the open() method to open a connection with a domain’s server. This method supports the GET and POST HTTP methods and takes the URL to connect to as a parameter. Once you’ve opened a connection, you can use the send() method to send a data string to the server for processing if needed. For example:

 

// 1. Create XDR object

var xdr = new XDomainRequest();

// 2. Open connection with server using POST method

xdr.open("POST", "http://www.contoso.com/xdr.txt");

// 3. Send string data to server

xdr.send("data to be processed");

 

XDR also has an abort() method to cancel an active request, which takes no parameters. Data is not available on an abort.

 

Properties

·         responseText - After the server responds, you can retrieve the data string through the read-only responseText property.

·         timeout - You can use the timeout property to set or retrieve the number of milliseconds the browser should wait for a server to respond.   IE defaults to no timeout if this property is not explicitly set. If the request times out, data is not available.

·         contentType – If you are posting data to the server, use the contentType property to define the content type string that will be sent to the server. If you are using a GET then this property will allow you to read the content type.

 

Events

XDR has the following events:

·         onerror – this event fires when there is an error and the request cannot be completed (for example, the network is not available).

·         ontimeout – this event fires when the request reaches its timeout as defined by the timeout property above. If the request times out, data is not available.

·         onprogress – this event fires while the server responds to the request by streaming data back to the browser.

·         onload – this event fires when the cross domain request is complete and data is available.

 

Security note: Cross domain requests can only be sent and received from a web page to URLs in the following IE zones. We discourage Intranet sites from making XDR data available to help prevent intranet data from leaking to malicious Internet sites.

 

 

 

Webpage is in the following zone (rows); webpage requests data from a URL in the following zone (columns):

                     | Local | Intranet | Trusted (Intranet) | Trusted (Internet) | Internet | Restricted
Local                | Allow | Allow    | Allow              | Allow              | Allow    | Block
Intranet             | Block | Allow    | Allow              | Allow              | Allow    | Block
Trusted (Intranet)   | Block | Allow    | Allow              | Allow              | Allow    | Block
Trusted (Internet)   | Block | Block    | Block              | Allow              | Allow    | Block
Internet             | Block | Block    | Block              | Allow              | Allow    | Block
Restricted           | Block | Block    | Block              | Block              | Block    | Block
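The matrix above can be read as a simple lookup. The sketch below encodes the table as printed; it is not IE8's actual implementation, and blocking unknown zones is a conservative assumption of the sketch.

```javascript
// Sketch of the zone matrix: rows are the webpage's zone, columns are
// the zone of the requested URL. Values mirror the table above.
const TARGET_ZONES = ["Local", "Intranet", "Trusted (Intranet)",
                      "Trusted (Internet)", "Internet", "Restricted"];

const XDR_ZONE_POLICY = {
  "Local":              ["Allow", "Allow", "Allow", "Allow", "Allow", "Block"],
  "Intranet":           ["Block", "Allow", "Allow", "Allow", "Allow", "Block"],
  "Trusted (Intranet)": ["Block", "Allow", "Allow", "Allow", "Allow", "Block"],
  "Trusted (Internet)": ["Block", "Block", "Block", "Allow", "Allow", "Block"],
  "Internet":           ["Block", "Block", "Block", "Allow", "Allow", "Block"],
  "Restricted":         ["Block", "Block", "Block", "Block", "Block", "Block"],
};

function xdrAllowed(pageZone, targetZone) {
  const row = XDR_ZONE_POLICY[pageZone];
  const col = TARGET_ZONES.indexOf(targetZone);
  if (!row || col < 0) return false; // unknown zones: block (assumption)
  return row[col] === "Allow";
}
```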

 

Security note: When using XDR, safely handling data provided by another web application is a critical operation.

 

For instance, the response could be parsed directly by JavaScript, evaluated with a freely available JSON parser (see http://www.json.org/), or inserted into the DOM as static text (using .innerText).
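The static-text option can be sketched as an escaping step. This helper is illustrative; assigning to .innerText in the browser achieves the same effect without explicit escaping.

```javascript
// Sketch: treat third-party response data as plain text before display.
// This mirrors what assigning to .innerText achieves in the browser;
// the helper name is an illustrative assumption.
function escapeAsStaticText(s) {
  return s.replace(/&/g, "&amp;")
          .replace(/</g, "&lt;")
          .replace(/>/g, "&gt;");
}
```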

 

 

 

Server Side

The browser will request data from the domain’s server by sending an XDomainRequest: 1 header. It will only complete the connection if the server responds with an XDomainRequestAllowed header with the value “1” for true.

For example, a server’s ASP page includes the following response header:

Response.AppendHeader("XDomainRequestAllowed","1");

This can be done in IIS, for example, using an ASP.NET page. The line of code below can be embedded in your ASP page to return the header.

 

<% Response.AddHeader "XDomainRequestAllowed","1" %>Data
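Outside ASP, the same handshake is just a header check. The sketch below uses the header names from the proposal; the handler shape and the opt-in flag are illustrative assumptions, not part of the proposal.

```javascript
// Sketch of the XDR opt-in handshake from the server's side. Header
// names come from the proposal; everything else is an assumption.
function handleXdrRequest(requestHeaders, serverOptsIn) {
  const responseHeaders = {};
  const isXdr = requestHeaders["XDomainRequest"] === "1";
  if (isXdr && serverOptsIn) {
    // Explicit opt-in: the browser will now expose the response body.
    responseHeaders["XDomainRequestAllowed"] = "1";
  }
  // Without the opt-in header, the browser discards the response.
  return responseHeaders;
}
```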

 

 

4.0 Security Model Flowchart

XDR Flowchart

5.0 Sample Site and Script

 

Please refer to the AJAX Hands on Labs on MSDN for demo script. This will need to be set up on your machine from the resource files.

 

6.0 Other Developer Benefits of Using XDR

1.        Simple development model.

a.        On the server, the server operator must simply add one new header to the HTTP response indicating that cross-domain sources may receive the data.  HTTP headers can be added by any CGI-style process (PHP/ASP.NET/etc.) or by the web server software (Apache/IIS/etc.) itself.

b.        On the client, the XDR object is all about cross-domain requests.  Because XDR is a new object, we are not forced to “bolt on” cross-domain security.  For example, XDR has no means of adding a custom header, because custom headers are dangerous for cross-domain security: the current web model does not expect a custom header to be sent across domains. In the past we have encountered web applications that, upon receiving a custom header via XHR, assume the request is coming from the same site.

 

2.        Provably secure

a.        The XDR security model is simple.  The client sends a request that clearly identifies its cross-domain nature, and the server must respond in kind for the Same-Origin-Policy to be relaxed such that the client can read the response.  If the server does not set the response header (a “non-participating” server), the client script is not permitted to read the response or determine anything about the target server.

 

b.        XDR is very tightly scoped to minimize the risk of increasing security exposure of the browser.

1.        Specifically, any request sent by XDR could also be emitted by a properly coded HTML FORM object.  Hence, any “non-participating” web server put at risk by XDR is also at risk from simple HTML.

 

Note: The only additional exposure XDR adds is the ability of the client to set a specific Content-Type header.

 

2.        As XDR strips all credentials and cookies, it presents even less attack surface for use in a Cross-Site Request Forgery (CSRF) attack than an HTML form.

 

c.        XDR attempts to block cross-zone and cross-protocol requests, an attack surface reduction (ASR) that exceeds what is undertaken elsewhere in the browser (e.g. SCRIPT SRC), where compatibility concerns prevent such blocking.

 

3.        Improved Access  Control “Locality”

a.        Unlike policy-file-based security, the XDR handshake is part of the HTTP request and response.  This means that XDR is not at risk from DNS rebinding or time-of-check-to-time-of-use (TOCTOU) attacks.

b.        Policy files must be located in a particular location on the server, which may cause operational problems for users with limited permissions on the server.  For example, consider the shared hosting case, where only one admin may write to the server root, but many users have permissions to write to sub-folders.  The users must petition the admin for an update to the policy file.

 

4.        Access-Control Flexibility

a.        As access control is granted on a per-response basis, the server may choose to allow or deny access based upon any criteria desired: for instance, the client’s Referer, the time of day, or the number of requests per hour.

b.        The XDR security model prevents attackers from easily determining the access control rules of the server.  The server may keep its rules as a trade secret.

 

7.0 Developer Release Notes

·         Not yet available across browsers; not a W3C standard.

·         Services must be explicitly coded to operate with XDR. 

·         As HTTP Methods are deliberately limited, standard REST-based interop is not possible.

·         As credentials are not provided by the browser, the client must transmit them in the request body.  This typically should not be a problem, but it could prevent use of the HttpOnly attribute on cookies that would otherwise carry the credentials.

·         The XDR handshake is HTTP-specific and cannot be directly translated for reuse in other protocols or situations (E.g. raw socket access). 

 

 

 

--
Sunava Dutta
Program Manager (AJAX) - Developer Experience Team, Internet Explorer

One Microsoft Way, Redmond WA 98052
TEL# (425) 705-1418

FAX# (425) 936-7329

 


image001.png (41K) Download Attachment
image004.png (36K) Download Attachment

Re: IE Team's Proposal for Cross Site Requests

John Panzer-3
In reply to this post by Sunava Dutta

Sunava Dutta wrote:

> Maciej Stachowiak [[hidden email]] said:
> <<But not exactly identical, since forms can't be used to POST XML content with a proper MIME type cross-domain.>>
>
> You're right-- setting an arbitrary request content-type is a capability not present in HTML forms today.  While we believe that this is a minimal increase in attack surface, we agree that it's worth considering whether or not such capability should be removed.
>
> If removed, all XDR POST requests could be sent with:
>
>                 Content-Type: text/plain; charset=UTF-8
>
> Servers would then be flexible in interpreting the data in the higher-level format they expect (JSON, XML, etc).
>  
This assumes that the server can know a priori what type they expect.  
This isn't necessarily the case for e.g., AtomPub servers.  Or are they
supposed to guess the content type from the content body?  That's surely
a recipe for security disasters down the road...



Re: IE Team's Proposal for Cross Site Requests

Maciej Stachowiak
In reply to this post by Sunava Dutta

On Mar 17, 2008, at 7:52 PM, Sunava Dutta wrote:

 
Maciej Stachowiak [[hidden email]] noted:
<<I think encouraging more content sniffing of text/plain on the server side is likely to increase, not reduce attack surface.>>
 
If a service is defined as accepting one format, it need only accept that format, and can reject anything else.  Sniffing is not recommended or desirable.

Such a service should reject an incorrect MIME type, which text/plain would be for XML.


Remember, even if you allow the Content-Type to be specified by the caller, the server has NO guarantee that the Content-Type specified is an accurate description of the POST body content.  To remain secure, servers MUST be robust in the face of malformed input. 

However, sniffing in text/plain is a whole different ball of wax.


  
Maciej Stachowiak [[hidden email]] noted:
<<So far I have not heard any *specific* security risks of the Access- Control model as compared to XDR, at least none that have held up to closer scrutiny. Is Microsoft aware of any specific such risks, as opposed to general concerns?>>
 
The Security Worries section here: http://wiki.mozilla.org/Cross_Site_XMLHttpRequest and the Security section here: http://www.w3.org/TR/access-control/#security describe some of the concerns related to the Access-Control model.  We believe that the XDR model effectively mitigates the concerns described. 

Do you have any specifics? Which of those items, in particular, do you think represent security vulnerabilities in XHR2+AC? Which are addressed by XDR? I can do this analysis myself if necessary, but if Microsoft is making the claim that XDR is more secure and that you believe XHR2+AC has security vulnerabilities, I think you should provide specific evidence to back up these claims.

(Note that these are both lists of issues that are believed to be adequately addressed, so it is not immediately obvious which items you believe are vulnerabilities.)

 
Maciej Stachowiak [[hidden email]] noted:
<<Certainly simplicity of client-side authoring, server-side authoring and implementation are worth discussing as well, but I think the approaches are similar enough that simplicity in itself is not a major security issue.>>
 
While simplicity alone obviously is no guarantee of security, design complexity almost always leads to implementation bugs.  Implementation bugs in access control mechanisms lead to security bugs.

That is true. But based on my experience writing the original implementation of XMLHttpRequest for WebKit, and my review of the spec, I do not think XHR2+AC rises to the level of complexity that is highly likely to lead to implementation bugs.

Regards,
Maciej


Re: IE Team's Proposal for Cross Site Requests

Thomas Roessler
In reply to this post by Sunava Dutta

On 2008-03-17 19:52:18 -0700, Sunava Dutta wrote:

> The Access-Control spec notes that:

>       Authors are to ensure that GET requests on their
>       applications have no side effects. If by some means an
>       attacker finds out what applications a user is associated
>       with, it might "attack" these applications with GET
>       requests that can effect [sic] the user's data (if the user
>       is already authenticated with any of these applications by
>       means of cookies or HTTP authentication).

> I'm concerned that this note suggests that the spec fails to meet
> its own requirement #2:

>       Must not require content authors or site maintainers to
>       implement new or additional security protections to
>       preserve their existing level of security protection.

> ...As cookies and HTTP authentication are commonly used security
> protections yet they are sent by cross-origin requests.  CSRF is
> already a growing problem in the wild, and the Access-Control
> mechanism requires that web developers understand extremely
> subtle aspects of the security model to keep their sites secure.

I'm not sure how subtle the GET vs POST aspect really is -- after
all, Web developers who use GET with side effects without employing
mitigating techniques will already expose themselves to:

- any clients or proxies that assume that GET is idempotent

- attackers' ability to place pretty arbitrary GET requests with
  HTTP authentication headers and cookies, cross-site

That's not new, and it's not made worse in any significant way by
the access-control spec.

--
Thomas Roessler, W3C  <[hidden email]>


Re: IE Team's Proposal for Cross Site Requests

Henri Sivonen
In reply to this post by John Panzer-3

On Mar 18, 2008, at 06:18, John Panzer wrote:

> Sunava Dutta wrote:
>> Maciej Stachowiak [[hidden email]] said:
>> <<But not exactly identical, since forms can't be used to POST XML  
>> content with a proper MIME type cross-domain.>>
>>
>> You're right-- setting an arbitrary request content-type is a  
>> capability not present in HTML forms today.  While we believe that  
>> this is a minimal increase in attack surface, we agree that it's  
>> worth considering whether or not such capability should be removed.
>>
>> If removed, all XDR POST requests could be sent with:
>>
>>                Content-Type: text/plain; charset=UTF-8
>>
>> Servers would then be flexible in interpreting the data in the  
>> higher-level format they expect (JSON, XML, etc).
>>
> This assumes that the server can know a priori what type they  
> expect.  This isn't necessarily the case for e.g., AtomPub servers.  
> Or are they supposed to guess the content type from the content  
> body?  That's surely a recipe for security disasters down the road...


In general, the XDR design mindset seems to assume that the server-side implementation will jump through whatever hoops the browser places.

For contrast, let's consider how cross-site XHR didn't require  
excessive hoop jumping in the case of the Validator.nu Web service  
API[1].

I did not design this Web service API for a nightly of Firefox 3 or  
any browser. I designed the API for non-browser apps (e.g. blogging  
systems written in Python, Ruby or Java in a different process)  
applying what I thought to represent the best practices in RESTful Web  
service design. The idea of allowing cross-site XHR came as an  
afterthought.

It turned out that using GET for preflight sucked, and I sent feedback  
to the WG. However, after the spec changed to use OPTIONS, it was  
super-easy. The changes were confined to one request dispatching  
servlet class. The main controller class didn't need any changes. The  
filters that enable form-based uploads and compression didn't need any  
changes. Note that the same URI entry point is used for browser-based  
HTML and XHTML UI, form-based upload and the Web service API. Form  
POSTs and Web service POSTs are discriminated based on Content-Type.
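That Content-Type discrimination can be sketched as a tiny router. The media-type strings below are assumptions for illustration, not Validator.nu's actual list.

```javascript
// Sketch: discriminate form POSTs from Web-service POSTs by media type,
// as described above. The type strings are illustrative assumptions.
function classifyPost(contentType) {
  const type = (contentType || "").split(";")[0].trim().toLowerCase();
  if (type === "application/x-www-form-urlencoded" ||
      type === "multipart/form-data") {
    return "form-upload";
  }
  return "web-service"; // e.g. text/html or application/xhtml+xml payloads
}
```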

Considering how non-disruptive the access-control-related changes were  
in a system that (I like to think) is a well-designed RESTful system,  
I think the access-control spec as it now stands (with OPTIONS) is  
pretty well designed. (Granted, Validator.nu is a bit abnormal in the  
sense that it doesn't have API keys, login or stuff like that, because  
the API is knowingly designed without a requirement to monetize the  
API.)

P.S. Using postMessage + same-site XHR as a surrogate for cross-site  
XHR would be bad in the Validator.nu case. Instead of adding a bit of  
OPTIONS handling in one place, I'd have to set aside a bit of URI  
space for serving an iframeable JS API from the same host name.  
(Currently, static content is served from a different host name by  
Apache without all the servlet stuff in between.) Moreover, I'd  
actually have to write the iframeable JS API. Even worse, that API  
would have to transport the document represented by a DOM tree across  
the postMessage boundary instead of merely passing the DOM tree to XHR  
for automatic serialization magic.

[1] http://wiki.whatwg.org/wiki/Validator.nu_Web_Service_Interface
--
Henri Sivonen
[hidden email]
http://hsivonen.iki.fi/



Re: IE Team's Proposal for Cross Site Requests

Laurens Holst-2
In reply to this post by Sunava Dutta
Sunava Dutta wrote:

> Maciej Stachowiak [[hidden email]] said:
> <<But not exactly identical, since forms can't be used to POST XML content with a proper MIME type cross-domain.>>
>
> You're right-- setting an arbitrary request content-type is a capability not present in HTML forms today.  While we believe that this is a minimal increase in attack surface, we agree that it's worth considering whether or not such capability should be removed.
>
> If removed, all XDR POST requests could be sent with:
>
>                 Content-Type: text/plain; charset=UTF-8
>
> Servers would then be flexible in interpreting the data in the higher-level format they expect (JSON, XML, etc).
>  
What? No, you should send the requests with no Content-Type at all, as
the Content-Type is not known.

Or, if you really do not want to increase the attack surface, you should
always send the content type application/x-www-form-urlencoded, and only
allow request entities constructed through an API. Because servers only
expect x-www-form-urlencoded and not text/plain, and servers might have
parsing issues if the POST body is malformed, both leading to changes
from what is currently possible with HTML and thus, security risks.

Note by the way that cross-site XHR basically works on a model that
normally ONLY allows GET requests (addressing my concerns on POST in my
previous mail), contrary to XDR which allows GET and POST. So this issue
you’re having does not apply to XHR. 1-0 for XHR.

Cross-site XHR has a special opt-in method to allow POST, DELETE and PUT
requests as well, when it is needed. This will not put any existing
sites at risk, because it’s opt-in (unlike XDR’s POST), the server needs
to EXPLICITLY allow them for a specific resource. Allowing these methods
at all is necessary to prevent sites from overloading the GET  
request in order to acquire their desired functionality. 2-0 for XHR.
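The opt-in model described above can be sketched as a simple check. (A
hedged Python sketch: the exact header name and syntax in the 2008
access-control draft differed from what eventually shipped as CORS's
`Access-Control-Allow-Methods`, so the comma-separated parsing here is an
assumption, not the spec's wire format.)

```python
def method_permitted(method, allowed_methods_header):
    """GET is allowed by default; any other method goes through only if
    the resource EXPLICITLY opted in by listing it (the opt-in described
    in the paragraph above). `allowed_methods_header` is None when the
    server sent no opt-in header at all."""
    if method.upper() == "GET":
        return True
    if allowed_methods_header is None:
        return False                 # no opt-in: request is blocked
    allowed = {m.strip().upper() for m in allowed_methods_header.split(",")}
    return method.upper() in allowed
```

Under this model an existing site that never sends the opt-in header
never sees a cross-site POST, PUT or DELETE, which is the contrast being
drawn with XDR's always-available POST.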


~Grauw

--
Ushiko-san! Kimi wa doushite, Ushiko-san nan da!!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Laurens Holst, student, university of Utrecht, the Netherlands.
Website: www.grauw.nl. Backbase employee; www.backbase.com.



Re: IE Team's Proposal for Cross Site Requests

Laurens Holst-2
Laurens Holst wrote:
> Or, if you really do not want to increase the attack surface, you
> should always send the content type application/x-www-form-urlencoded,
> and only allow request entities constructed through an API. Because
> servers only expect x-www-form-urlencoded and not text/plain, and
> servers might have parsing issues if the POST body is malformed, both
> leading to changes from what is currently possible with HTML and thus,
> security risks.

Sorry, apparently this is a misconception of mine: using
enctype="text/plain" you can apparently already send arbitrary
requests. So ignore this paragraph please :). The rest does still apply.

By the way, I do not see how requiring servers to ignore the request
entity's content type, forcing them to do content sniffing, makes things
more secure rather than less.
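To make the sniffing concern concrete, here is a minimal Python sketch
contrasting dispatch on a declared Content-Type with the guessing a
text/plain-only server would be forced into. (The function names and
heuristics are mine, not from any implementation discussed in the thread.)

```python
import json
from xml.etree import ElementTree


def dispatch_by_declared_type(content_type, body):
    """The AtomPub-style model: trust the Content-Type the client
    declared and parse accordingly. (Parameter handling such as
    '; charset=...' is omitted for brevity.)"""
    if content_type.startswith("application/json"):
        return ("json", json.loads(body))
    if content_type.endswith("+xml") or content_type.startswith(
            ("application/xml", "text/xml")):
        return ("xml", ElementTree.fromstring(body))
    raise ValueError("unsupported media type: %s" % content_type)


def guess_type(body):
    """What a server receiving only text/plain would have to do: sniff
    the body. Ambiguous payloads (e.g. a form-encoded string) defeat
    the guess, which is the security worry raised above."""
    stripped = body.lstrip()
    if stripped.startswith("<"):
        return "xml"
    if stripped.startswith(("{", "[")):
        return "json"
    return "unknown"
```

The declared-type path fails loudly on anything unexpected; the sniffing
path silently classifies, and misclassification is exactly where the
"security disasters down the road" come from.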


~Grauw

--
Ushiko-san! Kimi wa doushite, Ushiko-san nan da!!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Laurens Holst, student, university of Utrecht, the Netherlands.
Website: www.grauw.nl. Backbase employee; www.backbase.com.

