PFWG comments on Discovery & Registration of Multimodal Modality Components: State Handling

Michael Cooper
The Protocols and Formats Working Group has the following two comments
on Discovery & Registration of Multimodal Modality Components: State
Handling http://www.w3.org/TR/2015/WD-mmi-mc-discovery-20150611/

1) All figures in the document are available as high-resolution images on
mouse click, but this functionality is not available from the keyboard.
E.g. <img
onclick="window.open('http://upload.wikimedia.org/wikipedia/commons/3/38/Push_high.png','Push');">.
This should be made an <a> element to allow for keyboard interaction:
some Web users do not use a mouse but still want to see the images in
high resolution. In addition, all figures lack a long description.
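A minimal sketch of the keyboard-accessible alternative, reusing the URL
from the example above (the thumbnail filename, alt text, and window name
are illustrative, not taken from the document):

```html
<!-- An <a> element is focusable via Tab and activatable with Enter by
     default, so no onclick handler or tabindex is needed. -->
<a href="http://upload.wikimedia.org/wikipedia/commons/3/38/Push_high.png"
   target="Push">
  <img src="Push_thumbnail.png"
       alt="Push architecture diagram (opens high-resolution version)">
</a>
```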

2) The most interesting part for PF is the context object. However, the
document contains no specification for context. In [1], context data is
simply left undefined: "The format and meaning of this data is
application-specific." If [1] were to be revised, it would be good to
provide examples of context data that can be used to define a user's
preferences (e.g. a pointer to a GPII personal preference set), a device's
characteristics, and situational parameters. It is also not clear whether
the context data could include information on dynamic aspects of the
interaction, e.g. a sudden increase in the noise level around the user.
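Purely as an illustration (the MMI Architecture deliberately leaves this
application-specific, and every field name below is invented for this
sketch), a context object covering the three categories above plus one
dynamic aspect could look like:

```json
{
  "user": {
    "preferences": "https://example.org/gpii/preference-sets/alice"
  },
  "device": {
    "hasScreenReader": true,
    "outputModalities": ["speech", "visual"]
  },
  "situation": {
    "ambientNoiseDb": 78,
    "noiseTrend": "rising"
  }
}
```

A worked example along these lines in a revised [1] would make it clear
whether both static preferences and dynamic conditions are in scope.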

[1] Multimodal Architecture and Interfaces.  W3C Recommendation 25
October 2012. http://www.w3.org/TR/2012/REC-mmi-arch-20121025/

Michael Cooper
PFWG staff contact