
Multimodal Visual Languages User Interface, M3UI

Previous research has shown that deaf users spend more effort than their hearing peers when seeking information, particularly when that information is presented in signed-modality articles rather than written-modality articles. That is, a deaf user who consumes information in a signed modality must invest more effort in information seeking because fewer options and less technology are available for signed-modality content. User effort in finding and consuming information plays a large role in successful information retrieval and consumption, and increased effort makes failed information searches more likely. One way to examine and reduce these disparities in deaf users' effort is to develop an improved user interface (UI) for signed-modality academic articles, such as those in the Deaf Studies Digital Journal (DSDJ). We developed and validated a multimodal visual languages user interface (M3UI) that makes searching for academic information in signed-modality articles easier for college-educated deaf users, and found that, by drawing on the advantages of both signed and written modalities through M3UI, they can scan and understand information faster.
