AdaptO - Adaptive Multimodal Output

António Teixeira, Carlos Pereira, Miguel Oliveira e Silva, Osvaldo Pacheco, António Neves, José Casimiro

2011

Abstract

Currently, most multimodal output mechanisms use a very centralized architecture in which the various output modalities are completely devoid of any autonomy. Our proposal, AdaptO, uses an alternative approach, providing output modalities with the capacity to make decisions, thus collaborating with the fission output mechanism towards a more effective, modular, extensible and decentralized solution. In addition, our aim is to provide the mechanisms for a highly adaptable and intelligent multimodal output system, able to adapt itself to changing environment conditions (light, noise, distance, etc.) and to its users' needs, limitations and personal choices.
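To make the decentralization idea concrete, the following is an illustrative sketch (not the authors' implementation; all class and method names are hypothetical): each output modality reports its own suitability for the current environment, so the fission layer only compares self-reported scores instead of holding all modality knowledge centrally.

```python
# Hypothetical sketch of decentralized output fission: each modality
# decides its own fitness; the fission layer merely picks the best.
from dataclasses import dataclass

@dataclass
class Environment:
    noise_level: float    # 0.0 (silent) .. 1.0 (very noisy)
    light_level: float    # 0.0 (dark) .. 1.0 (bright)
    user_distance: float  # metres from the output device

class Modality:
    name = "abstract"
    def suitability(self, env: Environment) -> float:
        raise NotImplementedError
    def render(self, message: str) -> str:
        return f"[{self.name}] {message}"

class SpeechOutput(Modality):
    name = "speech"
    def suitability(self, env: Environment) -> float:
        # Speech degrades with ambient noise but carries over distance.
        return max(0.0, 1.0 - env.noise_level)

class ScreenOutput(Modality):
    name = "screen"
    def suitability(self, env: Environment) -> float:
        # On-screen text needs light and a nearby user.
        distance_penalty = min(1.0, env.user_distance / 3.0)
        return env.light_level * (1.0 - distance_penalty)

def fission_select(modalities, env: Environment) -> Modality:
    # Decentralized decision: knowledge lives in each modality,
    # the fission mechanism only compares the reported scores.
    return max(modalities, key=lambda m: m.suitability(env))

modalities = [SpeechOutput(), ScreenOutput()]
noisy_room = Environment(noise_level=0.9, light_level=0.8, user_distance=0.5)
best = fission_select(modalities, noisy_room)
print(best.render("Time for your exercise session."))
```

Under this split, adding a new modality (e.g. an avatar or earcon output) requires no change to the fission layer, which is the modularity and extensibility benefit the abstract claims.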

References

  1. Coetzee, L., Viviers, I., and Barnard, E. (2009). Model based estimation for multi-modal user interface component selection. In 20th Annual Symposium of the Pattern Recognition Association of South Africa (PRASA 2009), pages 1-6.
  2. Dumas, B., Ingold, R., and Lalanne, D. (2009a). Benchmarking fusion engines of multimodal interactive systems. In ICMI-MLMI '09: Proceedings of the 2009 International Conference on Multimodal Interfaces, pages 169-176, New York, NY, USA. ACM.
  3. Dumas, B., Lalanne, D., Guinard, D., Ingold, R., and Koenig, R. (2008). Strengths and weaknesses of software architectures for the rapid creation of tangible and multimodal interfaces. In Proceedings of 2nd international conference on Tangible and Embedded Interaction (TEI 2008), pages 47-54.
  4. Dumas, B., Lalanne, D., and Oviatt, S. (2009b). Multimodal interfaces: A survey of principles, models and frameworks. In Lalanne, D. and Kohlas, J., editors, Human Machine Interaction, volume 5440 of Lecture Notes in Computer Science, pages 3-26. Springer Berlin / Heidelberg.
  5. FIPA (2010, accessed 7 November 2010). FIPA - Foundation for Intelligent Physical Agents. http://www.fipa.org.
  6. Gordon-Salant, S. (2005). Hearing loss and aging: new research findings and clinical implications. Journal of Rehabilitation Research and Development, 42(4 Suppl 2):9-24.
  7. Heckmann, D., Schwartz, T., Brandherm, B., Schmitz, M., and von Wilamowitz-Moellendorff, M. (2005). Gumo - The General User Model Ontology. User Modeling 2005, pages 428-432.
  8. JADE (2010, accessed 7 November 2010). JADE - Java Agent DEvelopment Framework. http://jade.tilab.com.
  9. Karpov, A., Carbini, S., Ronzhin, A., and Viallet, J. E. (2008). Two SIMILAR Different Speech and Gestures Multimodal Interfaces. In Tzovaras, D., editor, Multimodal User Interfaces, Signals and Communication Technology, chapter 7, pages 155-184. Springer Berlin Heidelberg, Berlin, Heidelberg.
  10. Microsoft (2010 (accessed 18 October 2010)). Developing speech applications. Available: http://www.microsoft.com/speech/developers.aspx.
  11. Rousseau, C., Bellik, Y., and Vernier, F. (2005a). Multimodal output specification / simulation platform. In Proceedings of the 7th International Conference on Multimodal Interfaces, ICMI '05, pages 84-91, New York, NY, USA. ACM.
  12. Rousseau, C., Bellik, Y., and Vernier, F. (2005b). WWHT: un modèle conceptuel pour la présentation multimodale d'information [WWHT: a conceptual model for multimodal information presentation]. In Proceedings of the 17th Conférence Francophone sur l'Interaction Homme-Machine, IHM 2005, pages 59-66, New York, NY, USA. ACM.
  13. Rousseau, C., Bellik, Y., Vernier, F., and Bazalgette, D. (2004). Architecture framework for output multimodal systems design. In Proceedings of OZCHI.
  14. Rousseau, C., Bellik, Y., Vernier, F., and Bazalgette, D. (2005c). Multimodal output simulation platform for real-time military systems. In Proceedings of Human Computer Interaction International (HCI International'05), Las Vegas, USA.
  15. Rousseau, C., Bellik, Y., Vernier, F., and Bazalgette, D. (2006). A framework for the intelligent multimodal presentation of information. Signal Process., 86:3696-3713.
  16. Teixeira, A., Pereira, C., Oliveira e Silva, M., and Alvarelhão, J. (2011). Output matters! adaptable multimodal output for new telerehabilitation services for the elderly. In 1st International Living Usability Lab Workshop on AAL Latest Solutions, Trends and Applications - AAL 2011 (AAL@BIOSTEC 2011). Submitted.


Paper Citation


in Harvard Style

Teixeira A., Pereira C., Oliveira e Silva M., Pacheco O., Neves A. and Casimiro J. (2011). AdaptO - Adaptive Multimodal Output. In Proceedings of the 1st International Conference on Pervasive and Embedded Computing and Communication Systems - Volume 1: PECCS, ISBN 978-989-8425-48-5, pages 91-100. DOI: 10.5220/0003372500910100


in Bibtex Style

@conference{peccs11,
author={António Teixeira and Carlos Pereira and Miguel Oliveira e Silva and Osvaldo Pacheco and António Neves and José Casimiro},
title={AdaptO - Adaptive Multimodal Output},
booktitle={Proceedings of the 1st International Conference on Pervasive and Embedded Computing and Communication Systems - Volume 1: PECCS},
year={2011},
pages={91-100},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0003372500910100},
isbn={978-989-8425-48-5},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 1st International Conference on Pervasive and Embedded Computing and Communication Systems - Volume 1: PECCS
TI - AdaptO - Adaptive Multimodal Output
SN - 978-989-8425-48-5
AU - Teixeira A.
AU - Pereira C.
AU - Oliveira e Silva M.
AU - Pacheco O.
AU - Neves A.
AU - Casimiro J.
PY - 2011
SP - 91
EP - 100
DO - 10.5220/0003372500910100