dc.contributor.author Rathbun, Kevin
dc.contributor.author Berke, Larwan
dc.contributor.author Caulfield, Christopher
dc.contributor.author Stinson, Michael
dc.contributor.author Huenerfauth, Matt
dc.date.accessioned 2017-04-26T18:56:38Z
dc.date.available 2017-04-26T18:56:38Z
dc.date.issued 2017
dc.identifier.citation Journal on Technology and Persons with Disabilities 5: 130-140.
dc.identifier.issn 2330-4219
dc.identifier.uri
dc.description 32nd Annual International Technology and Persons with Disabilities Conference Scientific/Research Proceedings, San Diego, 2017
dc.description.abstract To compare methods of displaying speech-recognition confidence of automatic captions, we analyzed eye-tracking and response data from deaf or hard of hearing participants viewing videos.
dc.format.extent 11 pages
dc.language.iso en
dc.publisher California State University, Northridge.
dc.rights Copyright 2017 by the authors and California State University, Northridge
dc.subject Deaf and Hard of Hearing
dc.subject Emerging Assistive Technologies
dc.subject Research and Development
dc.title Eye Movements of Deaf and Hard of Hearing Viewers of Automatic Captions
dc.type Article
dc.rights.license Creative Commons Attribution-NoDerivs 4.0 International License.
