Publication of "Discovery and Registration of Multimodal Modality Components"


Deborah Dahl

I am pleased to announce that the Multimodal Interaction Working Group has published the First Public Working Draft of "Discovery & Registration of Multimodal Modality Components: State Handling" on June 11, 2015.


As people and objects move through smart environments, various capabilities in those environments become available for interaction. Many important use cases therefore require that systems be able to dynamically configure themselves with respect to new capabilities (referred to as "Modality Components") as they become available or unavailable. This document provides an approach to addressing these use cases.


From the W3C publication announcement:


“This document is addressed to developers who want either to develop Modality Components for Multimodal Applications distributed over a local network or "in the cloud". With this goal, in a multimodal system implemented according to the Multimodal Architecture Specification, the system must discover and register its Modality Components in order to preserve the overall state of the distributed elements. In this way, Modality Components can be composed with automation mechanisms in order to adapt the Application to the state of the surrounding environment.”


This version:


Latest published version:


Comments are welcome and should be sent to the group's public list, [hidden email].


Best regards,

Debbie Dahl

W3C Multimodal Interaction Working Group Chair