
Released

Conference Paper

Recognition of Hearing Needs From Body and Eye Movements to Improve Hearing Instruments

MPS-Authors

Bulling,  Andreas
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society;

Citation

Tessendorf, B., Bulling, A., Roggen, D., Stiefmeier, T., Feilner, M., Derleth, P., et al. (2011). Recognition of Hearing Needs From Body and Eye Movements to Improve Hearing Instruments. In K. Lyons, J. Hightower, & E. M. Huang (Eds.), Pervasive Computing (pp. 314-331). Berlin: Springer.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0017-AFF2-3
Abstract
Hearing instruments (HIs) have emerged as true pervasive computers, as they continuously adapt the hearing program to the user's context. However, current HIs are not able to distinguish different hearing needs in the same acoustic environment. In this work, we explore how information derived from body and eye movements can be used to improve the recognition of such hearing needs. We conduct an experiment to provoke an acoustic environment in which different hearing needs arise: active conversation and working while colleagues are having a conversation in a noisy office environment. We record body movements at nine body locations, eye movements using electrooculography (EOG), and sound using commercial HIs for eleven participants. Using a support vector machine (SVM) classifier and person-independent training, we improve the accuracy from 77% based on sound alone to 92% using body movements. With a view to future implementation in an HI, we then perform a detailed analysis of the sensors attached to the head. We achieve the best accuracy of 86% using eye movements, compared to 84% for head movements. Our work demonstrates the potential of additional sensor modalities for future HIs and motivates investigation of the wider applicability of this approach to further hearing situations and needs.
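
The abstract reports results from an SVM classifier evaluated with person-independent training, i.e. the classifier is never trained on data from the participant it is tested on. The sketch below shows how such a leave-one-participant-out evaluation is commonly set up with scikit-learn; the feature matrix, labels, and participant grouping are placeholder assumptions for illustration, not data or code from the paper.

```python
# Minimal sketch of person-independent (leave-one-participant-out) SVM
# classification. All data below is synthetic and purely illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

n_windows, n_features, n_participants = 1100, 24, 11
X = rng.normal(size=(n_windows, n_features))               # e.g. body/eye movement features per window
y = rng.integers(0, 2, size=n_windows)                     # 0 = "working", 1 = "active conversation"
groups = rng.integers(0, n_participants, size=n_windows)   # participant ID for each window

# Standardize features, then classify with an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))

# Leave-one-participant-out: each fold trains on 10 participants
# and tests on the held-out 11th, so training is person-independent.
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"Mean person-independent accuracy: {scores.mean():.2f}")
```

With real movement features in place of the random data, the mean of the per-participant fold accuracies corresponds to the person-independent accuracy figures quoted in the abstract.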