Journal on Technology & Persons with Disabilities Volume 7
http://hdl.handle.net/10211.3/210349
Collection of articles for Center on Disabilities' Journal (Vol. 7)
2024-03-28T14:21:38Z

LaTeX is NOT Easy: Creating Accessible Scientific Documents with R Markdown
http://hdl.handle.net/10211.3/210398
LaTeX is NOT Easy: Creating Accessible Scientific Documents with R Markdown
Seo, Joo Young; McCurry, Sean
Although recent advancements in assistive technology have increasingly enabled people who are blind or visually impaired to pursue science, technology, engineering, and mathematics (STEM) subjects and careers, there is a lack of authoring tools that let them independently produce the scientific documents and materials necessary for effective communication in mainstream practice. While LaTeX, a plain-text-based document preparation system, has been considered an accessible, full-fledged authoring and reference management tool, its steep learning curve and its output being largely limited to PDF have discouraged some blind people who lack a programming background and/or who would like to produce other accessible output formats. This paper calls attention to the need for an easy-to-write and accessible scientific document authoring tool by defining the scope of a scientific document, highlighting issues with the conventional methods the blind community has employed for document production, and proposing the R Markdown system as a compelling solution. This research developed and details the Accessible RMarkdown Online Writer (AROW) as a hands-on demonstration that a blind individual can produce highly accessible scientific documents in multiple formats, including Word, RTF, PDF, MathML/MathJax-enabled HTML, and presentations.
34th Annual Assistive Technology Conference Scientific/Research Proceedings, San Diego, 2019
2019-01-01T00:00:00Z

Multimodal Visual Languages User Interface, M3UI
http://hdl.handle.net/10211.3/210399
Multimodal Visual Languages User Interface, M3UI
Boudreault, Patrick; Codick, Elizabeth; Kushalnagar, Raja S.; Vogler, Christian; Willis, Athena
Previous research has shown that deaf users expend more effort than their hearing peers when seeking information in signed-modality articles compared with written-modality articles. That is, if deaf users consume information in a signed modality, they must invest more effort in information seeking due to the lack of options and technology available for signed-modality information. User effort in finding and consuming information plays a large role in successful information retrieval and consumption, with increased effort more likely to lead to failed information searches. One way to examine and reduce these disparities in deaf users' effort is to develop an improved user interface (UI) for signed-modality academic articles, such as those in the Deaf Studies Digital Journal (DSDJ). We developed and validated a multimodal visual languages user interface, M3UI, that makes searching for academic information in signed-modality articles easier for college-educated deaf users, and found that they can effectively scan and understand information faster by exploiting the advantages of both signed and written modalities through the M3UI interface.
34th Annual Assistive Technology Conference Scientific/Research Proceedings, San Diego, 2019
2019-01-01T00:00:00Z

Journal on Technology and Persons with Disabilities, Volume 7
http://hdl.handle.net/10211.3/210401
Journal on Technology and Persons with Disabilities, Volume 7
Miesenberger, Klaus; Ruiz, Shirley; Santiago, Julia
34th Annual Assistive Technology Conference Scientific/Research Proceedings, San Diego, 2019
Table of Contents:
Exploring the Use of Auditory Cues to Sonify Block-Based Programs, page 1; Stephanie Ludi, Jeffrey Wang, Kavya Chapati, Zain Khoja, Alice Nguyen
Mobile Health Technology Accessible to People with Visual Impairments, page 22; Hyung Nam Kim
A Policy Proposal to Support Self-Driving Vehicle Accessibility, page 36; Julian Brinkley, Shaundra Daily, Juan Gilbert
Suitable Size of 3D Printing Architecture Models for Tactile Exploration, page 45; Tetsuya Watanabe, Kana Sato
Evaluating Sign Language Animation through Models of Eye Movements, page 54; Abhishek Suhas Mhatre, Sedeeq Al-khazraji, Matt Huenerfauth
Tangible Cup for Elderly Social Interaction: Design TUI for & with Elderly, page 64; Way Kiat Bong, Weiqin Chen
Sensor Technology, Gamification, Haptic Interfaces in an Assistive Wearable, page 79; Nasrine Olson, Jarosław Urbański, Nils-Krister Persson, Joanna Starosta-Sztuczka, Mauricio Fuentes
A Multimodal Physics Simulation: Design and Evaluation with Diverse Learners, page 88; Brianna J. Tomlinson, Prakriti Kaini, E. Lynne Harden, Bruce N. Walker, Emily B. Moore
Global Atlas of People with Profound Intellectual and Multiple Disabilities, page 106; Meike Engelhardt, Bartosz Gluszak, Michal Kosiedowski, Torsten Krämer, Jarosław Urbański
Use of mHealth Technologies by People with Vision Impairment, page 120; Nicole Thompson, John Morris, Michael Jones, Frank DeRuyter
Implementing the ATLAS Self-Driving Vehicle Voice User Interface, page 136; Julian Brinkley, Shaundra B. Daily, Juan E. Gilbert
Accessibility of Voice-Activated Agents for People Who are Deaf or Hard of Hearing, page 144; Jason Rodolitz, Evan Gambill, Brittany Willis, Christian Vogler, Raja Kushalnagar
LaTeX is NOT Easy: Creating Accessible Scientific Documents with R Markdown, page 157; Joo Young Seo, Sean McCurry
Multimodal Visual Languages User Interface for Deaf Readers, page 172; Athena Willis, Elizabeth Codick, Patrick Boudreault, Christian Vogler, Raja Kushalnagar
Communication App for Children with Hearing and Developmental Difficulties, page 183; Kuniomi Shibata, Akira Hattori, Sayaka Matsumoto
2019-01-01T00:00:00Z

Communication App for Children with Hearing and Developmental Difficulties
http://hdl.handle.net/10211.3/210400
Communication App for Children with Hearing and Developmental Difficulties
Shibata, Kuniomi; Hattori, Akira; Matsumoto, Sayaka
The purpose of this article is to examine a mobile application that supports children and students with hearing difficulties and/or developmental disorders in interacting, understanding, and preparing for transitions between activities in their daily, school, and social lives. We believe that a communication support technique combining characters, pictograms, and photographs is effective for these children. The application recognizes spoken words using a speech recognition system and arranges the pictures corresponding to the recognized words in a sequence. By showing a sequence of pictures corresponding to a conversation, the application enables children with hearing difficulties and developmental disorders to understand their situations. The results of two demonstration experiments show that the application promotes mutual understanding between students with hearing difficulties and those with developmental disorders by combining pictograms and photographs with speech. Although the application is still at a basic stage and has remaining issues as a platform, the study suggests that it is clearly useful and that the present issues can be addressed. We suggest that the application could support visualization and feedback for planning as a new way of communication for children and students with hearing difficulties and developmental disorders.
34th Annual Assistive Technology Conference Scientific/Research Proceedings, San Diego, 2019
2019-01-01T00:00:00Z