Show simple item record

dc.contributor.author Rathbun, Kevin en
dc.contributor.author Berke, Larwan en
dc.contributor.author Caulfield, Christopher en
dc.contributor.author Stinson, Michael en
dc.contributor.author Huenerfauth, Matt en
dc.date.accessioned 2017-04-26T18:56:38Z en
dc.date.available 2017-04-26T18:56:38Z en
dc.date.issued 2017 en
dc.identifier.citation Journal on Technology and Persons with Disabilities 5: 130-140. en
dc.identifier.issn 2330-4219 en
dc.identifier.uri http://hdl.handle.net/10211.3/190208 en
dc.description 32nd Annual International Technology and Persons with Disabilities Conference Scientific/Research Proceedings, San Diego, 2017 en
dc.description.abstract To compare methods of displaying speech-recognition confidence of automatic captions, we analyzed eye-tracking and response data from deaf or hard of hearing participants viewing videos. en
dc.format.extent 11 pages en
dc.language.iso en en
dc.publisher California State University, Northridge. en
dc.rights Copyright 2017 by the authors and California State University, Northridge en
dc.subject Deaf and Hard of Hearing en
dc.subject Emerging Assistive Technologies en
dc.subject Research and Development en
dc.title Eye Movements of Deaf and Hard of Hearing Viewers of Automatic Captions en
dc.type Article en
dc.rights.license Creative Commons Attribution-NoDerivatives 4.0 International License. en

