[D3E] Possible Changes to Mutation Events


[D3E] Possible Changes to Mutation Events

Doug Schepers-3

Hi, WebApps Fans-

The subject of mutation events arose in dinner conversation during the
WebApps F2F.  There was a short thread on it afterward, which I'm
forwarding on with permission.

Currently, the spec says, "Many single modifications of the tree can
cause multiple mutation events to be dispatched. Rather than attempt to
specify the ordering of mutation events due to every possible
modification of the tree, the ordering of these events is left to the
implementation."

Jonas proposes two substantive changes to this:

* DOMNodeRemoved and DOMNodeRemovedFromDocument would be fired after the
mutation rather than before
* For DOM operations that perform multiple sub-operations (such as
moving an element), the resulting mutation events would be dispatched
(in order of operation) after all the sub-operations are complete.

I am inclined to put this in the spec, because they have identified it
as a major pain point, and because I'm too simpleminded to find a
pragmatic case where this would cause problems (I guess if someone was
counting on getting the mutation event before it occurred and stopping
it, for instance, they would be disappointed... but I wonder how
practical that is).

I would be very interested in hard data about real-world usage (Hixie?)
or implementation status that would countermand my changing this... so,
if anyone has any feedback on this matter, please let us know.

Regards-
-Doug

-------- Original Message --------
Date: Mon, 07 Jul 2008 18:08:37 -0700
From: Jonas Sicking <[hidden email]>
To: Maciej Stachowiak <[hidden email]>
Cc: Doug Schepers <[hidden email]>, Ian Hickson <[hidden email]>
Subject: Re: Mutation Events Pain Points
Maciej Stachowiak wrote:

>
> On Jul 6, 2008, at 9:32 PM, Jonas Sicking wrote:
>
>> Actually, the only strictly needed change is to change DOMNodeRemoved
>> and DOMNodeRemovedFromDocument to say that they are fired *after* the
>> mutation rather than before.
>>
>> However it would in general be useful to define more in detail when
>> the events should fire. For example, when inserting a fragment
>> containing 5 nodes using appendChild, is one mutation event fired
>> after each individual insertion, or are 5 events fired after all nodes
>> have been inserted? The latter is very strongly desired from an
>> implementation point of view.
>>
>> I'd like for the spec to say that when a DOM operation is performed
>> that causes multiple mutations to happen, all mutation events are
>> queued up and then fired before the operation finishes. For example
>> replaceChild can cause one removal when the old child is removed, one
>> removal when the new child is removed from its old location, and one
>> insertion when the new child is inserted. All of these would fire
>> after all mutations take place, but before replaceChild returns.
>> Similarly setting .innerHTML can cause a whole host of mutations when
>> the old children are removed and the new inserted, all of these would
>> fire right before the innerHTML setter returns.
>>
>> Hope that makes sense?
>>
>> Maciej, does this match what you've been wishing for?
>
> Pretty much, yes.
>
> The biggest implementation difficulty with the current spec is that it
> requires (either clearly or arguably) that some mutation events be fired
> in the middle of a compound DOM operation. Since those event listeners
> in principle could change anything about the DOM (either nodes being
> moved, or the target part of the tree), such compound operations have to
> check all of their assumptions any time they insert a node. I think for
> many DOM operations, more than half the lines of code in the WebKit and
> Gecko implementations are to deal with mutation events, which is pretty
> unfortunate.

Yup. Another side effect is that when we do other actions that happen to
mutate the DOM, such as some features in XBL and in the editor code, we
have to keep in mind that any DOM mutation can cause any evil JS to
execute and change the world under us.

We have a queuing mechanism that in general prevents DOM manipulations
(such as inserting a <script>) from causing JS to execute during the
mutation, however the only script execution that currently isn't
queue-able is mutation events.

> Your proposal sounds like it would solve this problem. If the mutation
> events all happen at the end, all you need is some kind of queuing
> mechanism.
>
> One other thing that should be clarified - when using HTML editing
> operations, sometimes compound DOM actions occur which are not the
> result of any DOM call. For instance, it is clear that
> document.execCommand("Bold") should count as a compound DOM operation.
> However, when the user hits Cmd-B on Mac or Ctrl-B on Windows, the user
> agent may perform the equivalent action but without actually going
> through any DOM call as such. Perhaps since this is a user agent action
> the UA is free to do anything with regards to event dispatch but it
> would be nice if the spec made this clear.

Yeah, it'd be good to formulate the language such that it's allowed for
an implementation to consider a whole set of operations as a single
"operation".

/ Jonas
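
A minimal JavaScript sketch of the ordering described above for
replaceChild, assuming the proposed "queue the mutation events, fire them
after the mutations but before the call returns" behaviour; the elements
and logging are only for illustration:

  var parent = document.createElement("div");
  var oldKid = parent.appendChild(document.createElement("span"));
  var other  = document.createElement("div");
  var newKid = other.appendChild(document.createElement("em"));

  function report(label) {
    return function () { console.log(label, "- parentNode:", this.parentNode); };
  }

  // Listeners sit directly on the nodes involved, so they fire whether or
  // not the node is still attached at dispatch time.
  oldKid.addEventListener("DOMNodeRemoved", report("old child removed"), false);
  newKid.addEventListener("DOMNodeRemoved", report("new child removed from old parent"), false);
  newKid.addEventListener("DOMNodeInserted", report("new child inserted"), false);

  parent.replaceChild(newKid, oldKid);
  // Proposed order, all logged before this line runs:
  //   "old child removed"                 - parentNode is already null
  //   "new child removed from old parent" - parentNode is already `parent`
  //   "new child inserted"                - parentNode is `parent`
  console.log("replaceChild returned");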




Re: [D3E] Possible Changes to Mutation Events

Sergey Ilinsky

Doug Schepers wrote:
> * DOMNodeRemoved and DOMNodeRemovedFromDocument would be fired after
> the mutation rather than before
> * DOM operations that perform multiple sub-operations (such as moving
> an element) would be dispatched (in order of operation) after all the
> sub-operations are complete.
General concerns:
1) Clearly defined use cases seem to be missing from the proposal, would
it be possible to bring them all to the table?
2) The changes contradict DOM Level 2 Events, where Mutation Events were
initially defined (back in the year 2000), thus creating
backwards-incompatible behavior

Specific concerns:
1) If DOMNodeRemovedFromDocument is fired after the mutation, then in
the listener for this event there is no way to know where the node was
removed from.
    (This does not apply to DOMNodeRemoved, since its relatedNode
property points to the parent the node was removed from)
2) If DOMNodeRemoved is fired after the mutation, the event won't be
capable of bubbling

(I have not yet dug into any specific functionality that depends on the
present behavior (and potential change) of the events in question (but
for sure lots of stuff would break))

Tech Lead at Backbase,
Sergey Ilinsky/

P.S. Backbase Ajax Framework contains an implementation of DOM-Events
(Level-3) module (as well as other DOM modules) and it is dependent on
the present behavior.



Re: [D3E] Possible Changes to Mutation Events

Kartikaya Gupta-3

On Tue, 15 Jul 2008 12:39:16 +0200, Sergey Ilinsky <[hidden email]> wrote:
>
> Specific concerns:
> 1) If DOMNodeRemovedFromDocument is fired after the mutation, then in
> the listener for this event there is no way to know where Node was
> removed from.
>     (This does not apply to DOMNodeRemoved, since it has a relatedNode
> property pointing to node removed)
> 2) If DOMNodeRemoved is fired after the mutation, event won't be capable
> of bubbling

+1. If you make these events fire after the mutation, then they won't go anywhere since the target of the event (the removed node) is free-standing. In order to listen for these events, you'd have to register a listener on every single DOM node, which seems pretty silly. Even if you changed one or both of the events to fire on the parent node, it would still be impossible to determine exactly where the removed node was removed from (i.e. what was the removed node's .nextSibling before it was removed?)

The same problem applies to batching up mutations and firing them at the end; there might be operations whose mutation events get discarded because some other mutation happens to the tree before the event is fired.

kats
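
Sketched in JavaScript, the bubbling problem looks like this (a minimal
illustration; the elements are made up):

  var list = document.createElement("ul");
  var item = list.appendChild(document.createElement("li"));

  // A listener on the parent relies on DOMNodeRemoved bubbling up from the
  // child while the child is still attached.
  list.addEventListener("DOMNodeRemoved", function (e) {
    console.log("child going away:", e.target.nodeName);
  }, false);

  list.removeChild(item);
  // Current spec: logs "child going away: LI", because the event fires
  // before the removal and the propagation path still includes the <ul>.
  // Proposed change: nothing is logged, because at dispatch time the <li>
  // is free-standing and the propagation path contains only the <li> itself.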


Re: [D3E] Possible Changes to Mutation Events

Boris Zbarsky
In reply to this post by Sergey Ilinsky

Sergey Ilinsky wrote:
> 1) Clearly defined use cases seem to be missing from the proposal, would
> it be possible to bring them all to the table?

It's more a matter of sanity of implementation than anything else, for
what it's worth.

> 2) The changes contradict with DOM-Level-2 Events where Mutation was
> initially defined (back in the year 2000) thus creating backwards
> incompatible behavior

Yes.  The whole proposal is to change the Events specification.

> 1) If DOMNodeRemovedFromDocument is fired after the mutation, then in
> the listener for this event there is no way to know where Node was
> removed from.
>    (This does not apply to DOMNodeRemoved, since it has a relatedNode
> property pointing to node removed)

Should DOMNodeRemovedFromDocument have a relatedNode too?

> 2) If DOMNodeRemoved is fired after the mutation, event won't be capable
> of bubbling

This is a much more serious issue.  To be honest, I think the original
mutation event design was pretty weird.  I would have fired the event on
the parent, with the relatedNode being the node being removed or
added...   That would avoid this issue.  Of course that would be
inconsistent with the way DOMNodeInserted works.

-Boris
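
A rough JavaScript sketch of that alternative design, simulated here with a
hypothetical event name ("x-node-removed") since no such event exists in
any spec:

  var container = document.createElement("div");
  var child = container.appendChild(document.createElement("span"));

  container.addEventListener("x-node-removed", function (e) {
    // The event targets the parent, so tree context is still available;
    // relatedNode carries the node that was removed.
    console.log("removed", e.relatedNode.nodeName, "from", e.target.nodeName);
  }, false);

  function removeChildWithNotify(parent, node) {
    parent.removeChild(node);
    var evt = document.createEvent("MutationEvents");
    evt.initMutationEvent("x-node-removed", true, false, node, "", "", "", 0);
    parent.dispatchEvent(evt);
  }

  removeChildWithNotify(container, child);  // logs: removed SPAN from DIV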


Re: [D3E] Possible Changes to Mutation Events

Boris Zbarsky
In reply to this post by Kartikaya Gupta-3

Kartikaya Gupta wrote:
> The same problem applies to batching up mutations and firing them at the end; there might be operations whose mutation events get discarded because some other mutation happens to the tree before the event is fired.

I'm not sure I follow this.  What exactly is the problem here?

-Boris


Re: [D3E] Possible Changes to Mutation Events

Kartikaya Gupta-3

On Tue, 15 Jul 2008 11:20:17 -0400, Boris Zbarsky <[hidden email]> wrote:
>
> Kartikaya Gupta wrote:
> > The same problem applies to batching up mutations and firing them at the end; there might be operations whose mutation events get discarded because some other mutation happens to the tree before the event is fired.
>
> I'm not sure I follow this.  What exactly is the problem here?
>

Sorry, I misread the original thread and misunderstood the scope of the change. Never mind.


Re: [D3E] Possible Changes to Mutation Events

Sergey Ilinsky
In reply to this post by Boris Zbarsky

Boris Zbarsky wrote:

>> 1) If DOMNodeRemovedFromDocument is fired after the mutation, then in
>> the listener for this event there is no way to know where Node was
>> removed from.
>>    (This does not apply to DOMNodeRemoved, since it has a relatedNode
>> property pointing to node removed)
> Should DOMNodeRemovedFromDocument have a relatedNode too?
A single relatedNode property would not be enough to satisfy a
developer's curiosity about where exactly the node was removed from.

> 2) If DOMNodeRemoved is fired after the mutation, event won't be
> capable of bubbling
>
> This is a much more serious issue.  To be honest, I think the original
> mutation event design was pretty weird.  I would have fired the event
> on the parent, with the relatedNode being the node being removed or
> added...   That would avoid this issue.  Of course that would be
> inconsistent with the way DOMNodeInserted works.
Although I do not share the negative attitude toward the design of the
Mutation module, I know there are indeed use cases that it does not
serve well.

The difficult part about changing anything in the Mutation module is
that it is almost impossible to say "a" there without also having to
say "b" - the design is very tightly coupled and consistent.

> -Boris

Sergey Ilinsky/



Re: [D3E] Possible Changes to Mutation Events

Doug Schepers-3
In reply to this post by Sergey Ilinsky

Hi, Sergey-

Thanks for your prompt and informative feedback!

Sergey Ilinsky wrote (on 7/15/08 6:39 AM):
> Doug Schepers wrote:
>> * DOMNodeRemoved and DOMNodeRemovedFromDocument would be fired after
>> the mutation rather than before
>> * DOM operations that perform multiple sub-operations (such as moving
>> an element) would be dispatched (in order of operation) after all the
>> sub-operations are complete.
> General concerns:
> 1) Clearly defined use cases seem to be missing from the proposal, would
> it be possible to bring them all to the table?

That's a reasonable request.

I think that Jonas and Maciej described some of the use cases (from an
implementor's point of view) in their discussion:
* optimizing based on queuing of events
* reduction of code
* consistency and predictability of behavior
* interoperability on the issue of when the events fire (currently the
spec says, "Many single modifications of the tree can cause multiple
mutation events to be dispatched. Rather than attempt to specify the
ordering of mutation events due to every possible modification of the
tree, the ordering of these events is left to the implementation." [1])

It would be nice to have more use cases outlined.

This knife cuts both ways, of course.  Can you cite some cases
(preferably in use today) that rely on keeping things the way they are?


> 2) The changes contradict with DOM-Level-2 Events where Mutation was
> initially defined (back in the year 2000) thus creating backwards
> incompatible behavior

Not necessarily.  It has the potential to create
backwards-incompatibility, but only if there are scripts and
implementations that rely on the specific details that are proposed for
change, i.e. that depend on the DOMNodeRemoved and
DOMNodeRemovedFromDocument events firing before removal.  If the script
only relies on knowing that a node was removed, and doesn't care much
when, then it's not actually a problem.


> Specific concerns:
> 1) If DOMNodeRemovedFromDocument is fired after the mutation, then in
> the listener for this event there is no way to know where Node was
> removed from.
>    (This does not apply to DOMNodeRemoved, since it has a relatedNode
> property pointing to node removed)
> 2) If DOMNodeRemoved is fired after the mutation, event won't be capable
> of bubbling

I don't believe that is correct.  DOM3 Event states:

"At the beginning of the dispatch, implementations must first determine
the event object's propagation path. This is an ordered list of event
targets the object may propagate to. The last item in the list is the
event's target; the preceding items in the list are referred to as the
target's ancestors and the immediately preceding item as the target's
parent. Once determined, the propagation path cannot be changed. As an
example, in the DOM event flow event listeners might change the position
of the target node in the document while the event object is being
dispatched; such changes do not affect the propagation path." [2]

You could simply add another event listener on the target element's
parent (or other relevant ancestor) to find the tree context, if necessary.

A use case demonstrating that this wouldn't suffice would weigh in favor
of not changing it.


> (I did not yet dig into any specific functionality that depends on the
> present behavior (and potential change) of the events in subject (for
> sure lots of stuff would break))

Are you certain about that?  I don't want stuff to break (especially in
an irreparable way), but neither do I want to forestall making a change
to the spec that would be easier to implement and more predictable to
use, based on supposition of problems.  It would be very useful if you
could point out what would break, and how.


> P.S. Backbase Ajax Framework contains an implementation of DOM-Events
> (Level-3) module (as well as other DOM modules) and it is dependent on
> the present behavior.

Can you please confirm that the behavior your implementation relies on
would actually be negatively affected by the particular changes
detailed?  Please consider that this may actually have a positive impact
on your implementation's performance and maintainability.  Detailed
comments would be helpful.

I will note that other changes are also being made to the specification,
and while we are trying to be as "backwards compatible" with earlier
revisions of the spec as possible, we also place a high priority on the
spec meeting its use cases and requirements.  I'm sensitive that this
spec is too long in being finished, and that implementations reasonably
have relied on it despite its status, but it's a balancing act.

[1]
http://www.w3.org/TR/DOM-Level-3-Events/events.html#Events-eventgroupings-mutationevents
[2] http://www.w3.org/TR/DOM-Level-3-Events/events.html#Events-flow

Regards-
-Doug Schepers
W3C Team Contact, WebApps, SVG, and CDF


Re: [D3E] Possible Changes to Mutation Events

Boris Zbarsky

Doug Schepers wrote:
>> 2) If DOMNodeRemoved is fired after the mutation, event won't be
>> capable of bubbling
>
> I don't believe that is correct.  DOM3 Event states:
>
> "At the beginning of the dispatch, implementations must first determine
> the event object's propagation path.

I think Sergey's point was that if dispatch happens after the removal,
and the target is the node being removed, this path will have nothing
else in it.

-Boris


Re: [D3E] Possible Changes to Mutation Events

Stewart Brodie

Boris Zbarsky <[hidden email]> wrote:

>
> Doug Schepers wrote:
> >> 2) If DOMNodeRemoved is fired after the mutation, event won't be
> >> capable of bubbling
> >
> > I don't believe that is correct.  DOM3 Event states:
> >
> > "At the beginning of the dispatch, implementations must first determine
> > the event object's propagation path.
>
> I think Sergey's point was that if dispatch happens after the removal,
> and the target is the node being removed, this path will have nothing else
> in it.

It may be possible to pre-compute the list, but that would certainly
complicate the code that implements event dispatch, and all of a sudden
we would have two events that need to behave differently in some cases -
because, of course, you wouldn't want to modify the tree if the event was
dispatched via dispatchEvent().

Specifying the order would be the most helpful thing that could be done for
these two events.  For removal, I dispatch DOMNodeRemoved first and then do a
post-order traversal issuing DOMNodeRemovedFromDocument.  For insertion, I
dispatch DOMNodeInserted first and then do a pre-order traversal issuing
DOMNodeInsertedIntoDocument.  IIRC, this was consistent with Firefox and
Opera at the time and it seems sensible enough.


--
Stewart Brodie
Software Engineer
ANT Software Limited
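
That ordering, sketched in JavaScript (fireEvent() is only a stand-in for
the engine's internal dispatch, not a real DOM API, and the sketch assumes
the subtree really is leaving or entering a document):

  function fireEvent(target, type) {
    console.log(type, "on", target.nodeName);  // placeholder for real dispatch
  }

  function preOrder(node, visit) {
    visit(node);
    for (var c = node.firstChild; c; c = c.nextSibling) preOrder(c, visit);
  }

  function postOrder(node, visit) {
    for (var c = node.firstChild; c; c = c.nextSibling) postOrder(c, visit);
    visit(node);
  }

  function notifyRemoval(node) {
    fireEvent(node, "DOMNodeRemoved");               // the removed node first
    postOrder(node, function (n) {                   // then the whole subtree,
      fireEvent(n, "DOMNodeRemovedFromDocument");    // children before parents
    });
  }

  function notifyInsertion(node) {
    fireEvent(node, "DOMNodeInserted");              // the inserted node first
    preOrder(node, function (n) {                    // then the whole subtree,
      fireEvent(n, "DOMNodeInsertedIntoDocument");   // parents before children
    });
  }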


Re: [D3E] Possible Changes to Mutation Events

Laurens Holst-2
In reply to this post by Doug Schepers-3
Hi Doug,

Doug Schepers schreef:

> Sergey Ilinsky wrote (on 7/15/08 6:39 AM):
>> Doug Schepers wrote:
>>> 1. DOMNodeRemoved and DOMNodeRemovedFromDocument would be fired
>>> after the mutation rather than before
>>> 2. DOM operations that perform multiple sub-operations (such as
>>> moving an element) would be dispatched (in order of operation) after
>>> all the sub-operations are complete.
>> General concerns:
>> 1) Clearly defined use cases seem to be missing from the proposal,
>> would it be possible to bring them all to the table?
>
> That's a reasonable request
>
> I think that Jonas and Maciej described some of the use cases (from an
> implementor's point of view) in their discussion:
> 1. optimizing based on queuing of events
> 2. reduction of code
> 3. consistency and predictability of behavior
> 4. interoperability on the issue of when the events fire (currently
> the spec says, "Many single modifications of the tree can cause
> multiple mutation events to be dispatched. Rather than attempt to
> specify the ordering of mutation events due to every possible
> modification of the tree, the ordering of these events is left to the
> implementation." [1])
I see, so the motivation for the change request to DOMNodeRemoved is
that the second change request (throwing events at the end, after all
operations) would be impossible to do if events are not always thrown at
the end. And the motivation for throwing events at the end seems to be
for a specific kind of optimisation called ‘queuing of events’. I would
appreciate it if someone could describe this optimisation.

Even ignoring the serious backwards compatibility issues that Sergey
described, I do not think this is a good idea. By defining that all
events have to be fired at the end of the operation, a method like
Document.renameNode can never be implemented by just calling existing
DOM operations; the implementation would need to call some internal
event-less version of the methods (e.g. removeNodeNoEvent()).

It seems to me that such a definition would possibly make
implementations more complex if not impossible (in case the
implementation provides no access to events-less methods), and put more
constraints on the underlying implementation, as the implementation
would now be required to throw the events separately from the actual
operations (which I do not think would be good design).

To provide a more concrete example, a DOM (e.g. the Mozilla DOM) could
never be extended with a custom document.renameNode method that performs
the operations as described in the DOM level 3 specification, because
the events would fire too soon. Not that Backbase implements events like
this, by the way.

With regard to the DOMNodeRemoved change request, I do not think these
arguments apply. It is doubtful that moving the dispatching of the event
to another place will reduce code (2), there is a sensible reason for
why some events fire before and some fire after the action (3), and the
spec clearly defines how the event should work (4). I don’t really
understand exactly what kind of optimisation (1) entails, but I think it
would not matter for ‘event queueing’ if the event was fired before or
after the action.

To expand a little more on item 3: I do not think the ‘consistency’
argument really applies here, because there are plenty of events that
fire before the actual action takes place (e.g. all cancelable events).
Now of all 7 mutation events, only two are fired before the action, so
it is true the majority of events fire after. However, they do this
because it makes sense for their specific operations, not because of
some unwritten rule that events always have to be fired after the action.

By the way, note that Backbase is also looking at this from an
implementor’s point of view :).

> It would be nice to have more use case outlined.
>
> This knife cuts both ways, of course.  Can you cite some cases
> (preferably in use today) that rely on keeping things the way they are?

In our product we have several controls that either access the
parentNode property (which would no longer be accessible) or have an
event listener for DOMNodeRemoved on their parent. Also, it is fair to
assume that there will be several customer projects expecting the
currently defined functionality.

It is not so much that with the requested changes the desired
functionality could not be achieved (although DOMNodeRemovedFromDocument
would need to provide the relatedNode property, and we would have to
attach DOMNodeRemoved event handlers to all child nodes instead of their
parent). The objection is rather that backwards compatibility with a REC
from 2000 is broken, for reasons that remain largely unclear. The
defined behaviour of DOMNodeRemoved makes sense, and is useful.
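
For a concrete picture, this is the shape of control code that depends on
the current timing (a simplified sketch, not actual Backbase code):

  function watchOptions(listElement) {
    // One listener on the container; relies on DOMNodeRemoved bubbling up
    // from child options while option.parentNode is still set.
    listElement.addEventListener("DOMNodeRemoved", function (e) {
      var option = e.target;
      if (option.parentNode === listElement) {
        console.log("option about to be removed:", option.textContent);
        // update selection state, option count, etc.
      }
    }, false);
  }

  var box = document.createElement("select");
  var opt = box.appendChild(document.createElement("option"));
  opt.textContent = "first";
  watchOptions(box);
  box.removeChild(opt);
  // Under the current spec the handler above runs; under the proposed
  // change it would not, because the event would no longer reach the
  // container.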

>> 2) The changes contradict with DOM-Level-2 Events where Mutation was
>> initially defined (back in the year 2000) thus creating backwards
>> incompatible behavior
>
> Not necessarily.  It has the potential to create
> backwards-incompatibility, but only if there are scripts and
> implementations that rely on the specific details that are proposed
> for change, i.e. that depend on the DOMNodeRemoved and
> DOMNodeRemovedFromDocument events firing before removal.  If the
> script only relies on knowing that a node was removed, and doesn't
> care much when, then it's not actually a problem.
The differences are pretty big, you shouldn’t downplay them as being
‘details’. I’m sure things will break. I think this is a very bad idea.

I’ve done a quick survey of our controls that use the
DOMNodeRemoved(FromDocument) events; as you can read below, the majority
would break on these ‘details’.

>> Specific concerns:
>> 1) If DOMNodeRemovedFromDocument is fired after the mutation, then in
>> the listener for this event there is no way to know where Node was
>> removed from.
>>    (This does not apply to DOMNodeRemoved, since it has a relatedNode
>> property pointing to node removed)
>> 2) If DOMNodeRemoved is fired after the mutation, event won't be
>> capable of bubbling
>
> I don't believe that is correct.  DOM3 Event states:
>
> "At the beginning of the dispatch, implementations must first
> determine the event object's propagation path. This is an ordered list
> of event targets the object may propagate to. The last item in the
> list is the event's target; the preceding items in the list are
> referred to as the target's ancestors and the immediately preceding
> item as the target's parent. Once determined, the propagation path
> cannot be changed. As an example, in the DOM event flow event
> listeners might change the position of the target node in the document
> while the event object is being dispatched; such changes do not affect
> the propagation path." [2]
>
> You could simply add another event listener on the target element's
> parent (or other relevant ancestor) to find the tree context, if
> necessary.
>
> A use case demonstrating that this wouldn't suffice would weigh in
> favor of not changing it.
This section is talking about what happens when dispatchEvent is called.
In Jonas’s proposal, at this time the node would no longer be attached
to its parent node, and thus no bubbling would occur, because the
ordered list of targets would only contain a single node.

>> (I did not yet dig into any specific functionality that depends on
>> the present behavior (and potential change) of the events in subject
>> (for sure lots of stuff would break))
>
> Are you certain about that?  I don't want stuff to break (especially
> in an irreparable way), but neither do I want to forestall making a
> change to the spec that would be easier to implement and more
> predictable to use, based on supposition of problems.  It would be
> very useful if you could point out what would break, and how.

It is certain.

Some examples of what would break in our product:

- Our comboBox control would no longer register child option
elements being removed from the tree.
- Our dataContextMenu control would no longer properly remove an event
listener when it is (re)moved, leading to unpredictable behaviour. E.g.
if it is re-attached, the event will occur twice, and when it is moved,
it would still respond to events on the old location.
- The options property of a control extending from formList (e.g. our
suggestBox, listBox controls) would no longer be updated properly upon
removal of list options.
- The selected property of the listBox would no longer be updated
properly upon removal of an option.

The list goes on. Out of 18 DOMNodeRemoved and
DOMNodeRemovedFromDocument event handlers in our bindings alone, at
least 12 are dependent on the current behaviour (specifically,
‘parentNode’ being available, and the DOMNodeRemoved event bubbling).

I have not evaluated our customer and consulting projects, but I can
predict there will be many more issues there.

I disagree with your suggestion that the event as it is currently
specified would be “difficult to implement” or “unpredictable to use”.
Rather, I would argue that the opposite is true, and these changes would
make implementing Events more complicated, and would make DOMNodeRemoved
behave very differently from DOMNodeInserted.

>> P.S. Backbase Ajax Framework contains an implementation of DOM-Events
>> (Level-3) module (as well as other DOM modules) and it is dependent
>> on the present behavior.
>
> Can you please confirm that the behavior your implementation relies on
> would actually be negatively affected by the particular changes
> detailed?  Please consider that this may actually have a positive
> impact on your implementation's performance and maintainability.  
> Detailed comments would be helpful.

We would definitely be negatively affected by this change; compatibility
would break, and we would have to implement ugly workarounds that are
not consistent with DOMNodeInserted.

E.g. instead of registering one DOMNodeInserted and one DOMNodeRemoved
handler to deal with insertion and removal of child nodes, we would need
to register one DOMNodeInserted handler and then a DOMNodeRemoved
handler on each child node. Also, in these handlers, we could not access
the parentNode and would instead have to use relatedNode, reducing the
likelihood that we can call generic methods to deal with the changes.

I do not see how we would benefit from this change, as we have currently
implemented DOM Events without issues, and the optimisation Mozilla
wants to make is not detailed.

> I will note that other changes are also being made to the
> specification, and while we are trying to be as "backwards compatible"
> with earlier revisions of the spec as possible, we also place a high
> priority on the spec meeting its use cases and requirements.  I'm
> sensitive that this spec is too long in being finished, and that
> implementations reasonably have relied on it despite its status, but
> it's a balancing act.  

I do not care so much about backwards compatibility with earlier
revisions of the DOM level 3 spec (although I hope there won’t be really
big changes :)); however, this concerns compatibility with DOM level 2,
which has been a REC since 2000. As far as I know (and if Appendix B:
Changes is not omitting anything), DOM level 3 has so far not introduced
any backwards incompatibilities with DOM level 2. Doing this would set a
very bad precedent.

If you introduce incompatible behaviour with regard to DOM level 2,
there is no way to prevent existing applications from breaking either,
because DOM does not provide a version mechanism that knows an older
version is expected and could provide backwards compatible behaviour.
And either way, I think having to branch code (or worse, providing
different implementations) based on version is undesirable.

I think you should be very, very reluctant to break backwards
compatibility with an 8-year old REC. The DOM specifications are a core
part of XML technologies, and if those standardised core technologies
can not be trusted to be compatible with previous Recommendations,
requiring application-level changes, maybe XML technologies aren’t as
reliable as we all thought.

~Grauw

--
Note: New email address! Please update your address book.

~~ Ushiko-san! Kimi wa doushite, Ushiko-san nan da!! ~~
Laurens Holst, student, university of Utrecht, the Netherlands
Website: www.grauw.nl. Backbase employee; www.backbase.com



Re: [D3E] Possible Changes to Mutation Events

Doug Schepers-3

Hi, Grauw-

First, I want to note that I'm just collecting data here... I can see
both sides of the argument on this particular issue, so as editor, I'm
happy to specify this however it seems most beneficial to authors and
implementors (in that order).  I appreciate all the good feedback
everyone is supplying.

I realize that by relaying and summarizing Jonas' and Maciej's request,
I may seem to have already aligned myself with a "side" here, but any
defense I'm making is as an advocate for the authors and for the devil
(not necessarily in that order).

So, I will leave it up to Jonas and Maciej to present counter-arguments.

Just a couple comments inline...

Laurens Holst wrote (on 7/16/08 9:36 AM):
>
> To expand a little more on item 3; I do not think the ‘consistency’
> argument really applies here, because there are plenty of events that
> fire before the actual action takes place (e.g. all cancelable events).
> Now of all 7 mutation events, only two are fired before the action, so
> it is true the majority of events fires after. However they do this
> because it makes sense for their specific operations, not because of
> some unwritten rule that events always have to be fired after the action.

The consistency and predictability I was referring to were not alignment
with the other mutation events firing after the operation, but rather with:

a) The bit in DOM3 Events that says, "Rather than attempt to specify the
ordering of mutation events due to every possible modification of the
tree, the ordering of these events is left to the implementation."

b) predictability that operations will not end up with surprising
results regarding the tree structure as a result of complex operations.

> By the way, note that Backbase is also looking at this from an
> implementor’s point of view :).

Yes, it seems Backbase would be considered both an implementor and an
author (that is, you implement the functionality of the spec, and
provide content on top of that functionality).  It would be interesting
to see an examination of the different needs and markets for desktop
browser vendors, mobile browser vendors, and script library vendors,
with regards to reliance on underlying platforms and ability to push
changes out into deployed code.

I took a look through the Backbase demo... very impressive stuff.  Much
of it was even accessible and navigable by keyboard control, which was
even nicer.


>> Not necessarily.  It has the potential to create
>> backwards-incompatibility, but only if there are scripts and
>> implementations that rely on the specific details that are proposed
>> for change, i.e. that depend on the DOMNodeRemoved and
>> DOMNodeRemovedFromDocument events firing before removal.  If the
>> script only relies on knowing that a node was removed, and doesn't
>> care much when, then it's not actually a problem.
>
> The differences are pretty big, you shouldn’t downplay them as being
> ‘details’. I’m sure things will break. I think this is a very bad idea.

I didn't mean "details" in a pejorative sense, but rather as a contrast
to a more drastic change, such as removing mutation events altogether.
The point was that certain aspects of the events could possibly be
changed without affecting existing applications or script code; you have
evidence that this is not the case here, but without knowing the details
of that, it was hard to judge.  The devil is in the details, after
all... and specifications are, let's face it, all about details.


> It is certain.
>
> Some examples of what would break in our product:

Okay, thanks for the details, especially regarding what remedial steps
you'd have to take.


> I disagree with your suggestion that the event as it is currently
> specified would be “difficult to implement” or “unpredictable to use”.

I was really trying to characterize Jonas' stance, and may not have been
successful at it.  I invite him to provide more details.


> I do not care so much about backwards compatibility with earlier
> revisions of the DOM level 3 spec (although I hope there won’t be really
> big changes :)), however this concerns compatibility with DOM level 2
> which has been a REC since 2000. As far as I know (and if Appendix B:
> Changes is not omitting anything), DOM level 3 has so far not introduced
> any backwards incompatibilities with DOM level 2. Doing this would set a
> very bad precedent.

None of the other changes currently being considered would affect
compatibility with DOM2... most of them have to do with Key Identifiers
and IMEs, new events (e.g. wheel, mousewheel, dblclick), event order,
and default actions (off the top of my head).  I'm glad that you are
paying attention, and look forward to your feedback on these other
issues, too.


> I think you should be very, very reluctant to break backwards
> compatibility with an 8-year old REC. The DOM specifications are a core
> part of XML technologies, and if those standardised core technologies
> can not be trusted to be compatible with previous Recommendations,
> requiring application-level changes, maybe XML technologies aren’t as
> reliable as we all thought.

Well, there are those who would argue that anyway, but I'm not one of
them. :)

Your point about compatibility with DOM2 is very well made and taken.

Regards-
-Doug Schepers
W3C Team Contact, WebApps, SVG, and CDF


Re: [D3E] Possible Changes to Mutation Events

Boris Zbarsky
In reply to this post by Laurens Holst-2

Laurens Holst wrote:
> I see, so the motivation for the change request to DOMNodeRemoved is
> that the second change request (throwing events at the end, after all
> operations) is be impossible to do if events are not always thrown at
> the end. And the motivation for throwing events at the end seems to be
> for a specific kind of optimisation called ‘queuing of events’.

It's not quite an optimization.  At heart, it's a security requirement:
untrusted script must not be run while data structures are in an
inconsistent state.

It can be worked around by adding a lot more code to make sure things
are in a consistent state at all times during, say, a DocumentFragment
insertion, which is what some implementations have been doing.  But this
greatly increases complexity and reduces performance of cases that are
commonly used on the Web (innerHTML).

> By defining that all
> events have to be fired at the end of the operation, e.g.
> Document.renameNode can never be implemented by just calling existing
> DOM operations

This is a serious concern, yes.  On the other hand, that's already the
case in a lot of situations with the DOM because of the way ranges work.
  You end up having to call notification-less versions of "basic" methods...

> With regard to the DOMNodeRemoved change request, I do not think these
> arguments apply. It is doubtful that moving the dispatching of the event
> to another place will reduce code

I can guarantee you that it would reduce code and improve security in Gecko.

> there is a sensible reason for why some events fire before and some fire after the action

Agreed, but in practice this causes some problems.  If it's possible to
address the problems while still addressing the mutation event use
cases, that would be great.

> In our product we have several controls that either access the
> parentNode property (which would no longer be accessible)

I should note that being able to use parentNode like this assumes that
you are the only one listening for the mutation event.  That might be a
good assumption in your project, of course...

> or have an
> event listener for DOMNodeRemoved on their parent.

Whatever we do, this case certainly needs to work, in my opinion.
Firing DOMNodeRemoved on the node that actually got removed is not as
important as this.  Again, in my opinion.

-Boris





Re: [D3E] Possible Changes to Mutation Events

Maciej Stachowiak
In reply to this post by Laurens Holst-2


On Jul 16, 2008, at 6:36 AM, Laurens Holst wrote:

> Hi Doug,
>
> Doug Schepers schreef:
>> Sergey Ilinsky wrote (on 7/15/08 6:39 AM):
>>> Doug Schepers wrote:
>>>> 1. DOMNodeRemoved and DOMNodeRemovedFromDocument would be fired  
>>>> after the mutation rather than before
>>>> 2. DOM operations that perform multiple sub-operations (such as  
>>>> moving an element) would be dispatched (in order of operation)  
>>>> after all the sub-operations are complete.
>>> General concerns:
>>> 1) Clearly defined use cases seem to be missing from the proposal,  
>>> would it be possible to bring them all to the table?
>>
>> That's a reasonable request
>>
>> I think that Jonas and Maciej described some of the use cases (from  
>> an implementor's point of view) in their discussion:
>> 1. optimizing based on queuing of events
>> 2. reduction of code
>> 3. consistency and predictability of behavior
>> 4. interoperability on the issue of when the events fire (currently  
>> the spec says, "Many single modifications of the tree can cause  
>> multiple mutation events to be dispatched. Rather than attempt to  
>> specify the ordering of mutation events due to every possible  
>> modification of the tree, the ordering of these events is left to  
>> the implementation." [1])
>
> I see, so the motivation for the change request to DOMNodeRemoved is  
> that the second change request (throwing events at the end, after  
> all operations) is be impossible to do if events are not always  
> thrown at the end. And the motivation for throwing events at the end  
> seems to be for a specific kind of optimisation called ‘queuing of  
> events’. I would appreciate if someone could describe this  
> optimisation.

The purpose is not optimization, but rather reducing code complexity  
and risk. DOM mutation events can make arbitrary changes to the DOM,  
including ones that may invalidate the rest of the operation. Let's  
say you call parent.replaceChild(old, new). If the DOMNodeRemoved  
notification is fired before the removal of old, or even between the  
removal and the insertion, it might remove old from parent and moved  
elsewhere in the document. The remove notification for new (if it  
already had a parent) could also move old, or new, or parent. There's  
no particularly valid reason to do this, but Web-facing  
implementations must be robust in the face of broken or malicious  
code. This means that at every stage of a multistep operation, the  
implementation has to recheck its assumptions. In WebKit and Gecko,  
the code for many of the basic DOM operations often is more than 50%  
code to dispatch mutation events, re-check assumptions and abort if  
needed. Dispatching mutation events at the end of a compound operation  
doesn't have this problem - there is no need to re-check assumptions  
because the operation is complete.
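
A small JavaScript sketch of the kind of hostile listener this is about
(the elements are made up; the point is only that the listener runs in the
middle of the compound operation under the current spec):

  var grandparent = document.createElement("div");
  var parent = grandparent.appendChild(document.createElement("div"));
  var oldKid = parent.appendChild(document.createElement("span"));
  var newKid = document.createElement("em");

  oldKid.addEventListener("DOMNodeRemoved", function () {
    // Runs mid-operation under the current spec; it can move `parent`
    // (or anything else) while replaceChild() is still in progress.
    grandparent.removeChild(parent);
  }, false);

  parent.replaceChild(newKid, oldKid);
  // Before performing the insertion step, the engine can no longer assume
  // that `parent` is where it was, that `oldKid` is still its child, or
  // that the document looks anything like it did when the call started.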

> Even ignoring the serious backwards compatibility issues that Sergey  
> described, I do not think this is a good idea. By defining that all  
> events have to be fired at the end of the operation, e.g.  
> Document.renameNode can never be implemented by just calling  
> existing DOM operations; the implementation would need to call some  
> internal event-less version of the methods (e.g. removeNodeNoEvent()).

First of all, this is not a big deal for implementations. Second, it  
seems to me this is true whether removeNode fires the event first or  
last.


> It seems to me that such a definition would possibly make  
> implementations more complex if not impossible (in case the  
> implementation provides no access to events-less methods), and put  
> more constraints on the underlying implementation, as the  
> implementation would now be required to throw the events separately  
> from the actual operations (which I do not think would be good  
> design).

No, it would make implementations much simpler by removing all the  
code that handles the very unlikely case of the mutation event  
listener modifying the DOM in a way that invalidates the operation. I  
know for sure this is the case for WebKit's DOM implementation, and  
Mozilla folks have told me the same is true for Gecko.

[...snip...]

> I do not care so much about backwards compatibility with earlier  
> revisions of the DOM level 3 spec (although I hope there won’t be  
> really big changes :)), however this concerns compatibility with DOM  
> level 2 which has been a REC since 2000. As far as I know (and if  
> Appendix B: Changes is not omitting anything), DOM level 3 has so  
> far not introduced any backwards incompatibilities with DOM level 2.  
> Doing this would set a very bad precedent.
>
> If you introduce incompatible behaviour with regard to DOM level 2,  
> there is no way to prevent existing applications from breaking  
> either, because DOM does not provide a version mechanism that knows  
> an older version is expected and could provide backwards compatible  
> behaviour. And either way, I think having to branch code (or worse,  
> providing different implementations) based on version is undesirable.
>
> I think you should be very, very reluctant to break backwards  
> compatibility with an 8-year old REC. The DOM specifications are a  
> core part of XML technologies, and if those standardised core  
> technologies can not be trusted to be compatible with previous  
> Recommendations, requiring application-level changes, maybe XML  
> technologies aren’t as reliable as we all thought.

Unfortunately back in 2000 we did not have the kind of robust feedback  
cycle between standards orgs and implementors that we do today, so it  
was not realized at the time that the design of DOM mutation events  
required so much implementation complexity.

Regards,
Maciej



Re: [D3E] Possible Changes to Mutation Events

Stewart Brodie

Maciej Stachowiak <[hidden email]> wrote:

> The purpose is not optimization, but rather reducing code complexity  
> and risk. DOM mutation events can make arbitrary changes to the DOM,  
> including ones that may invalidate the rest of the operation. Let's  
> say you call parent.replaceChild(old, new). If the DOMNodeRemoved  
> notification is fired before the removal of old, or even between the  
> removal and the insertion, it might remove old from parent and moved  
> elsewhere in the document. The remove notification for new (if it  
> already had a parent) could also move old, or new, or parent. There's  
> no particularly valid reason to do this, but Web-facing  
> implementations must be robust in the face of broken or malicious  
> code. This means that at every stage of a multistep operation, the  
> implementation has to recheck its assumptions. In WebKit and Gecko,  
> the code for many of the basic DOM operations often is more than 50%  
> code to dispatch mutation events, re-check assumptions and abort if  
> needed. Dispatching mutation events at the end of a compound operation  
> doesn't have this problem - there is no need to re-check assumptions  
> because the operation is complete.

I agree with all that, but it's not the whole story, because making this
change has potentially severe consequences for memory usage if you start
moving large subtrees around within a document.  Just how long is the event
queue allowed to get?

If you're going to postpone the event dispatch, when do you build the
listener list for each event?  Presumably you must built it at the time at
which you queue the event?  If so, then that's a whole lot more work that's
now necessary in removeEventListener[NS].


> > It seems to me that such a definition would possibly make  
> > implementations more complex if not impossible (in case the  
> > implementation provides no access to events-less methods), and put  
> > more constraints on the underlying implementation, as the  
> > implementation would now be required to throw the events separately  
> > from the actual operations (which I do not think would be good  
> > design).
>
> No, it would make implementations much simpler by removing all the  
> code that handles the very unlikely case of the mutation event  
> listener modifying the DOM in a way that invalidates the operation. I  
> know for sure this is the case for WebKit's DOM implementation, and  
> Mozilla folks have told me the same is true for Gecko.

It complicates our implementation too, but at least it's relatively bounded
and simple to revalidate compared to the unbounded memory usage of queueing
everything up.


--
Stewart Brodie


Re: [D3E] Possible Changes to Mutation Events

Jonas Sicking-2
In reply to this post by Laurens Holst-2

Laurens Holst wrote:

> Hi Doug,
>
> Doug Schepers schreef:
>> Sergey Ilinsky wrote (on 7/15/08 6:39 AM):
>>> Doug Schepers wrote:
>>>> 1. DOMNodeRemoved and DOMNodeRemovedFromDocument would be fired
>>>> after the mutation rather than before
>>>> 2. DOM operations that perform multiple sub-operations (such as
>>>> moving an element) would be dispatched (in order of operation) after
>>>> all the sub-operations are complete.

Note that the second request here is not a request for a change, but
rather for a clarification. There is nothing in the spec today that
states *when* the events that fire after the mutation are supposed to
fire. In theory you could queue up all such events and just check once
an hour if anything needs firing and do so at that point.

The first request is a request for an incompatible change though, as
others have already noted.

>>> General concerns:
>>> 1) Clearly defined use cases seem to be missing from the proposal,
>>> would it be possible to bring them all to the table?
>>
>> That's a reasonable request
>>
>> I think that Jonas and Maciej described some of the use cases (from an
>> implementor's point of view) in their discussion:
>> 1. optimizing based on queuing of events
>> 2. reduction of code
>> 3. consistency and predictability of behavior
>> 4. interoperability on the issue of when the events fire (currently
>> the spec says, "Many single modifications of the tree can cause
>> multiple mutation events to be dispatched. Rather than attempt to
>> specify the ordering of mutation events due to every possible
>> modification of the tree, the ordering of these events is left to the
>> implementation." [1])
>
> I see, so the motivation for the change request to DOMNodeRemoved is
> that the second change request (throwing events at the end, after all
> operations) is be impossible to do if events are not always thrown at
> the end. And the motivation for throwing events at the end seems to be
> for a specific kind of optimisation called ‘queuing of events’. I would
> appreciate if someone could describe this optimisation.

The problem we are struggling with is that the current design is
very complex to implement. Any code we have that somewhere inside
requires the DOM to be mutated means that we have to recheck all
invariants after each mutation. This is because during the
mutation a mutation event could have fired which has completely changed
the world under us. These changes can be as severe as totally changing
the structure of the whole DOM, navigating away to another webpage
altogether and/or closing the current window.

To take an example:
Implementing XBL requires that when an attribute is changed on one
element, a set of other attribute mutations might need to happen. Today,
after each such mutation we need to check that no other mutation has
happened that we need to react to. Before setting the next attribute we
need to check if the element is still in the tree, if the XBL binding is
still attached, if the original attribute that was set is still set, and
to the same value, if there's something else that has changed about the
binding that makes the attribute-setting no longer needed, if the user
is still on the same page or if we've started tearing down the DOM, etc.
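
The re-checking pattern, sketched in JavaScript rather than the C++ the
engine actually uses (the function and its checks are made up for the
example):

  function mirrorAttributes(source, target, names) {
    for (var i = 0; i < names.length; i++) {
      // Each setAttribute() below can fire DOMAttrModified, whose listener
      // may change anything at all, so the invariants are re-checked
      // before every write.
      if (!target.parentNode) return;                // pulled out of the tree?
      if (!source.hasAttribute(names[i])) continue;  // attribute still set?
      target.setAttribute(names[i], source.getAttribute(names[i]));
    }
  }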

In languages like C++ this is even worse since it's possible that all the
pointers we use in the algorithm could now be pointing to deleted
objects. Dereferencing such pointers can lead to crashes which can be
exploitable.

Especially the last part is something that is a big concern. It is sort
of ok if doing really weird things during a mutation event results in
weird results. It's worse, but still mostly ok if it leads to crashes.
Authors are unlikely to be bothered much if totally revamping the DOM
during a mutation event leads to any of this. They can just do whatever
they need to do later, if they need to do it at all.  But if it leads to
calling functions on deleted objects, then the result is exploitable
crashes with all the stolen data and botnets that come with that.

From a security point of view, implementing mutation events just does
not make sense to us. It has led to more security problems than is
justifiable given the value of the feature.

For us the question at this point is just as much "should we keep
mutation events at all" as it is "are we ok with changing the
implementation to be incompatible with the spec".

> Even ignoring the serious backwards compatibility issues that Sergey
> described, I do not think this is a good idea. By defining that all
> events have to be fired at the end of the operation, e.g.
> Document.renameNode can never be implemented by just calling existing
> DOM operations; the implementation would need to call some internal
> event-less version of the methods (e.g. removeNodeNoEvent()).

Actually, it's the contrary. Implementing document.renameNode right now
is a royal pain since it involves multiple mutations to DOM nodes. After
each mutation we need to check that everything is in a state where we
can continue, or if we need to update all local variables since the
world changed around us. It is currently easier to implement renameNode
using other methods that don't fire mutation events, and then fire any
needed events manually afterwards.

If we changed the spec as described, the implementation can just keep a
flag stating "i'm inside a DOM operation, queue events instead of firing
them", and then call a generic "clear flag and fire any queued events at
the end" function at the end.
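
A JavaScript sketch of that flag-and-queue scheme (real engines do this
internally in C++; the function names here are made up):

  var operationDepth = 0;
  var pendingEvents = [];

  function enterOperation() { operationDepth++; }

  function queueMutationEvent(target, event) {
    if (operationDepth > 0) {
      pendingEvents.push({ target: target, event: event });  // inside a DOM operation
    } else {
      target.dispatchEvent(event);                            // fire immediately
    }
  }

  function leaveOperation() {
    if (--operationDepth === 0) {
      var queued = pendingEvents;
      pendingEvents = [];
      // Fire everything that was queued, in the order the mutations
      // happened, before the compound operation returns to its caller.
      for (var i = 0; i < queued.length; i++) {
        queued[i].target.dispatchEvent(queued[i].event);
      }
    }
  }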

> It seems to me that such a definition would possibly make
> implementations more complex if not impossible (in case the
> implementation provides no access to events-less methods), and put more
> constraints on the underlying implementation, as the implementation
> would now be required to throw the events separately from the actual
> operations (which I do not think would be good design).

As someone who works on a DOM implementation that is used for the
general web on a daily basis, my findings are the opposite.

> To provide a more concrete example, a DOM (e.g. the Mozilla DOM) could
> never be extended with a custom document.renameNode method that performs
> the operations as described in the DOM level 3 specification, because
> the events would fire too soon. Not that Backbase implements events like
> this, by the way.

I do agree that it would be impossible to implement certain DOM features
in javascript. However I've always found this a pretty small price to
pay. The DOM should mostly be implemented by DOM implementations, not by
javascript libraries on top of the DOM implementation. Javascript
libraries can implement whatever APIs they want; the need for
interoperability between them isn't as great.

I do agree that it's neat when javascript libraries can fix deficiencies
when implementations are lacking, but I consider interoperability
between implementations more important.

>> It would be nice to have more use case outlined.
>>
>> This knife cuts both ways, of course.  Can you cite some cases
>> (preferably in use today) that rely on keeping things the way they are?
>
> In our product we have several controls that either access the
> parentNode property (which would no longer be accessible) or have an
> event listener for DOMNodeRemoved on their parent. Also, it is fair to
> assume that there will be several customer projects expecting the
> currently defined functionality.

I think we can provide the same set of data through new properties.

For example we could fire a new event on the old parent of the removed
node. This event could have the .relatedNode be the removed child, and a
new .relatedIndex property hold the index the node was removed from.
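
A listener for that hypothetical event might look like this (neither the
event name used here nor the relatedIndex property exists in any
specification; both are only illustrative):

  var oldParent = document.createElement("ul");

  oldParent.addEventListener("x-child-removed", function (e) {
    // e.target is the old parent, so tree context is still available even
    // though the removed child itself is already detached.
    console.log("lost child", e.relatedNode.nodeName,
                "which used to be at index", e.relatedIndex);
  }, false);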

> It is not so much that with the requested changes the desired
> functionality could not be achieved (although DOMNodeRemovedFromDocument
> would need to provide the relatedNode property, and we would have to
> attach DOMNodeRemoved event handlers to all child nodes instead of their
> parent). The objection is rather that backwards compatibility with a REC
> from 2000 is broken, for reasons that remains largely unclear. The
> defined behaviour of DOMNodeRemoved makes sense, and is useful.

I hope I made the reasons more clear.

/ Jonas



Re: [D3E] Possible Changes to Mutation Events

Maciej Stachowiak
In reply to this post by Stewart Brodie


On Jul 16, 2008, at 2:03 PM, Stewart Brodie wrote:

> Maciej Stachowiak <[hidden email]> wrote:
>
>> The purpose is not optimization, but rather reducing code complexity
>> and risk. DOM mutation events can make arbitrary changes to the DOM,
>> including ones that may invalidate the rest of the operation. Let's
>> say you call parent.replaceChild(old, new). If the DOMNodeRemoved
>> notification is fired before the removal of old, or even between the
>> removal and the insertion, it might remove old from parent and move it
>> elsewhere in the document. The remove notification for new (if it
>> already had a parent) could also move old, or new, or parent. There's
>> no particularly valid reason to do this, but Web-facing
>> implementations must be robust in the face of broken or malicious
>> code. This means that at every stage of a multistep operation, the
>> implementation has to recheck its assumptions. In WebKit and Gecko,
>> the code for many of the basic DOM operations often is more than 50%
>> code to dispatch mutation events, re-check assumptions and abort if
>> needed. Dispatching mutation events at the end of a compound  
>> operation
>> doesn't have this problem - there is no need to re-check assumptions
>> because the operation is complete.
>
> I agree with all that, but it's not the whole story, because making  
> this
> change has potentially severe consequences for memory usage if you  
> start
> moving large subtrees around within a document.  Just how long is  
> the event
> queue allowed to get?

It will only grow without bound if every mutation event handler in  
turn modifies the DOM itself, or if a compound DOM operation is  
unbounded.

> If you're going to postpone the event dispatch, when do you build the
> listener list for each event?  Presumably you must built it at the  
> time at
> which you queue the event?  If so, then that's a whole lot more work  
> that's
> now necessary in removeEventListener[NS].

No, the event is dispatched when it comes off the queue, therefore the  
listeners in effect at that time should be the ones that apply, same  
as for any other event.
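
As a sketch of what that means in practice (assuming the proposed queued
dispatch, and assuming container is just some element with several
children):

// Events are queued during the compound operation and flushed before the
// innerHTML setter returns. The first queued event to reach this listener
// unregisters it, so the remaining queued events never invoke it:
// listeners are resolved at dispatch time, not at queue time.
function onRemoved(e) {
    document.removeEventListener("DOMNodeRemoved", onRemoved, true);
}
document.addEventListener("DOMNodeRemoved", onRemoved, true);
container.innerHTML = "";   // removes several children in one operation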

>> No, it would make implementations much simpler by removing all the
>> code that handles the very unlikely case of the mutation event
>> listener modifying the DOM in a way that invalidates the operation. I
>> know for sure this is the case for WebKit's DOM implementation, and
>> Mozilla folks have told me the same is true for Gecko.
>
> It complicates our implementation too, but at least it's relatively  
> bounded
> and simple to revalidate compared to the unbounded memory usage of  
> queueing
> everything up.

It's certainly not simple (unless both WebKit and Gecko developers  
missed something). The memory needed for the queue will be 0 in most  
cases since the vast majority of pages register no mutation event  
listeners. Even if they do, the queue size is likely to be trivial in  
all the circumstances I can imagine and to be dwarfed by the size of  
the DOM itself.

Regards,
Maciej




Re: [D3E] Possible Changes to Mutation Events

Stewart Brodie

Maciej Stachowiak <[hidden email]> wrote:

>
> On Jul 16, 2008, at 2:03 PM, Stewart Brodie wrote:
>
> > Maciej Stachowiak <[hidden email]> wrote:
> >
> >> The purpose is not optimization, but rather reducing code complexity
> >> and risk. DOM mutation events can make arbitrary changes to the DOM,
> >> including ones that may invalidate the rest of the operation. Let's
> >> say you call parent.replaceChild(old, new). If the DOMNodeRemoved
> >> notification is fired before the removal of old, or even between the
> >> removal and the insertion, it might remove old from parent and move it
> >> elsewhere in the document. The remove notification for new (if it
> >> already had a parent) could also move old, or new, or parent. There's
> >> no particularly valid reason to do this, but Web-facing
> >> implementations must be robust in the face of broken or malicious
> >> code. This means that at every stage of a multistep operation, the
> >> implementation has to recheck its assumptions. In WebKit and Gecko,
> >> the code for many of the basic DOM operations often is more than 50%
> >> code to dispatch mutation events, re-check assumptions and abort if
> >> needed. Dispatching mutation events at the end of a compound  
> >> operation
> >> doesn't have this problem - there is no need to re-check assumptions
> >> because the operation is complete.
> >
> > I agree with all that, but it's not the whole story, because making this
> > change has potentially severe consequences for memory usage if you start
> > moving large subtrees around within a document.  Just how long is the
> > event queue allowed to get?
>
> It will only grow without bound if every mutation event handler in turn
> modifies the DOM itself, or if a compound DOM operation is unbounded.

Unbounded queueing is a real concern.  Having said that, the current
situation gives us unbounded recursion, which is equally unacceptable. At
least the recursion depth is determined by the number of recursive calls
made by the event listeners, whereas the queue length depends on the
number of nodes in the affected subtree, which is likely to be
substantially greater.
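
The unbounded-recursion case is easy to construct, of course, with a
deliberately pathological listener (a sketch, illustrating the current
per-mutation dispatch only):

// Under per-mutation dispatch, each appendChild fires DOMNodeInserted
// synchronously, and the handler appends yet another node before the
// previous call returns, so this recurses without bound.
document.addEventListener("DOMNodeInserted", function (e) {
    document.body.appendChild(document.createElement("div"));
}, true);
document.body.appendChild(document.createElement("div"));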

Maybe we just have to hope nobody's listening for the events at all so they
can be optimised away completely.


--
Stewart Brodie


Re: [D3E] Possible Changes to Mutation Events

Maciej Stachowiak


On Jul 16, 2008, at 5:00 PM, Stewart Brodie wrote:

> Maciej Stachowiak <[hidden email]> wrote:
>
>>
>> On Jul 16, 2008, at 2:03 PM, Stewart Brodie wrote:
>>
>>> Maciej Stachowiak <[hidden email]> wrote:
>>>
>>>> The purpose is not optimization, but rather reducing code  
>>>> complexity
>>>> and risk. DOM mutation events can make arbitrary changes to the  
>>>> DOM,
>>>> including ones that may invalidate the rest of the operation. Let's
>>>> say you call parent.replaceChild(old, new). If the DOMNodeRemoved
>>>> notification is fired before the removal of old, or even between  
>>>> the
>>>> removal and the insertion, it might remove old from parent and
>>>> move it
>>>> elsewhere in the document. The remove notification for new (if it
>>>> already had a parent) could also move old, or new, or parent.  
>>>> There's
>>>> no particularly valid reason to do this, but Web-facing
>>>> implementations must be robust in the face of broken or malicious
>>>> code. This means that at every stage of a multistep operation, the
>>>> implementation has to recheck its assumptions. In WebKit and Gecko,
>>>> the code for many of the basic DOM operations often is more than  
>>>> 50%
>>>> code to dispatch mutation events, re-check assumptions and abort if
>>>> needed. Dispatching mutation events at the end of a compound
>>>> operation
>>>> doesn't have this problem - there is no need to re-check  
>>>> assumptions
>>>> because the operation is complete.
>>>
>>> I agree with all that, but it's not the whole story, because  
>>> making this
>>> change has potentially severe consequences for memory usage if you  
>>> start
>>> moving large subtrees around within a document.  Just how long is  
>>> the
>>> event queue allowed to get?
>>
>> It will only grow without bound if every mutation event handler in  
>> turn
>> modifies the DOM itself, or if a compound DOM operation is unbounded.
>
> Unbounded queueing is a real concern.  Having said that, the current
> situation is that we have unbounded recursion, which is equally
> unacceptable, although the recursion depth is at least defined by  
> the number
> of recursive calls made by the event listeners, whereas the queue  
> length is
> dependent on the number of nodes in the affected subtree, which is  
> likely to
> be substantially greater.

Can you describe a plausible scenario where growth of the mutation  
event queue would be a significant issue? You've said repeatedly you  
are worried about it but have not given an example of when this might  
happen. Are you talking about the kind of programmer error that  
results in an infinite loop?

> Maybe we just have to hope nobody's listening for the events at all  
> so they
> can be optimised away completely.

That's certainly the common case.


Re: [D3E] Possible Changes to Mutation Events

Kartikaya Gupta-3
In reply to this post by Jonas Sicking-2

On Wed, 16 Jul 2008 16:18:39 -0500, Jonas Sicking <[hidden email]> wrote:

>
> Laurens Holst wrote:
> >
> > I see, so the motivation for the change request to DOMNodeRemoved is
> > that the second change request (throwing events at the end, after all
> > operations) would be impossible to do if events are not always thrown at
> > the end. And the motivation for throwing events at the end seems to be
> > for a specific kind of optimisation called ‘queuing of events’. I would
> > appreciate if someone could describe this optimisation.
>
> The problem we are struggling with is that the current design is very
> complex to implement. Any code of ours that requires the DOM to be
> mutated somewhere inside it means that we have to recheck all invariants
> after each mutation. This is because during the mutation a mutation
> event could have fired which has completely changed the world under us.
> These changes can be as severe as totally changing the structure of the
> whole DOM, navigating away to another webpage altogether, and/or closing
> the current window.
>

I understand your concerns, and while your proposed solution would solve your problem, it pushes this exact same burden onto web authors. Say we go ahead and change the spec so that all the events are queued up and fired at the end of a compound operation. Now listeners that receive these events cannot be sure the DOM hasn't changed out from under *them* as part of a compound operation. Consider the following example:

<html><body>
 <style>.lastLink { color: red }</style>
 <a href="http://example.org">i want to be the last link</a>
 <div id="emptyMe">
  <a href="http://example.org">example two</a>
  <a class="lastLink" href="http://example.org">example three</a>
 </div>
 <script type="text/javascript">
    var numLinks = document.links.length;
    document.addEventListener( "DOMNodeRemovedFromDocument", function(e) {
        if (e.target.nodeName == 'A') { // or e.relatedNode.nodeName as the case may be
            if (--numLinks > 0) {
                document.links[ numLinks - 1 ].className = 'lastLink';
            }
        }
    }, true );
 </script>
</body></html>

If you did something like document.getElementById('emptyMe').innerHTML = '' and considered it a compound operation, the code above, which works with current implementations, would die because numLinks would be out of sync with document.links.length, and the array indexing would fail. To avoid this scenario, the code has to be rewritten to re-query document.links.length instead of assuming numLinks will always be valid. This is exactly the same problem you're currently having - the DOM is changing under the code unexpectedly, forcing it to recheck assumptions.
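
A defensively rewritten listener, re-querying the live document.links
collection on every event rather than trusting a cached count, might look
something like this (just a sketch of the kind of change that would be
needed):

document.addEventListener( "DOMNodeRemovedFromDocument", function(e) {
    if (e.target.nodeName == 'A') {
        // Re-read the live collection instead of relying on numLinks,
        // since other removals may already have happened by the time
        // the queued event is delivered.
        var links = document.links;
        if (links.length > 0) {
            links[ links.length - 1 ].className = 'lastLink';
        }
    }
}, true );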

You could argue that this example is contrived (and it is), but I think it still illustrates the point. The current interleaving of mutations and events is bad for (some) implementations and good for web authors. Your proposed interleaving is good for (some) implementations and bad for web authors. In both cases it's for the same reason - being able to make assumptions simplifies code, so the side that gets to make those assumptions is better off, and the other side has to revalidate their assumptions.

I also consider this entire problem to be more of an implementation detail than anything else. The current spec can pose a security risk if not properly implemented, but that's true of any spec. The security risk identified is only a problem for C/C++ implementations. Speaking as a Java implementor, I prefer the spec as it stands now. It is far simpler for me to assume listeners don't mutate the DOM, and then catch any exceptions that get thrown when that assumption is violated. With the proposed changes, I would have to implement some complicated queuing solution that increases memory requirements dramatically.

Cheers,
kats
