CSUN Centers and Institutes
http://hdl.handle.net/10211.3/121123
2024-03-28T19:07:02Z
ASL Consent in the Digital Informed Consent Process
http://hdl.handle.net/10211.3/225180
ASL Consent in the Digital Informed Consent Process
Kosa, Ben S.; Minakawa, Ai; Boudreault, Patrick; Vogler, Christian; Kushalnagar, Poorna; Kushalnagar, Raja
An estimated 500,000 deaf people in the U.S. use ASL. Compared to the general population, deaf people are at greater risk of chronic health problems and experience significant health disparities and inequities (Sanfacon, Leffers, Miller, Stabbe, DeWindt, Wagner, & Kushalnagar, 2020; Kushalnagar, Reesman, Holcomb, & Ryan, 2019; Kushalnagar & Miller, 2019). Much of this disparity is explained by barriers in the environment, such as the unavailability of materials in ASL and a lack of healthcare professionals who know how to provide deaf patient-centered care. Intersecting social determinants of health (e.g., intrinsic, such as low education, and extrinsic, such as barriers to healthcare services) create a mutually constituted vulnerability for health disparities when a person is deaf (Kushalnagar & Miller, 2019; Lesch, Brucher, Chapple, R., & Chapple, K., 2019; Smith & Chin, 2012). Moreover, the longstanding history of inequitable access to language and education, together with a lack of printed information and materials, leaves people who are deaf and use ASL unaware of opportunities to participate in cutting-edge research and clinical trials. An unintended consequence is that principal investigators (PIs) neglect to include people who are deaf and use ASL in their subject sample pools, and this marginalized population continues to experience disparities in both health outcomes and clinical research participation. One barrier is the unavailability of informed consent materials that are accessible in ASL. The current research study, conducted by our team at the Center for Deaf Health Equity at Gallaudet University, addresses the language barrier in the consent process through a careful reconsideration of its traditional English format and the development of an American Sign Language (ASL) informed consent app.
Our team successfully leveraged existing machine learning methods to develop a way to navigate and sign an informed consent document in ASL. We call this new method of navigation and signature "ASL consent." We found that deaf participants, who were primarily college educated, were more likely to agree that the process of obtaining ASL consent through an accessible app is comparable to the traditional English consent process.
38th Annual Assistive Technology Conference Scientific/Research Proceedings.
2023-01-01T00:00:00Z
Development of a Shoulder-Mounted Tactile Notification System for the Deaf and Hard of Hearing
http://hdl.handle.net/10211.3/225181
Development of a Shoulder-Mounted Tactile Notification System for the Deaf and Hard of Hearing
Murayama, Yuta; Emura, Rito; Tanaka, Shunya; Nakai, Yukiya; Shitara, Akihisa; Yoneyama, Fumio; Shiraishi, Yuhki
In this study, a shoulder-mounted tactile notification system is proposed and developed so that deaf and hard-of-hearing (DHH) people can go out safely and operate the system securely. The vibration detection rate, correct response rate, and reaction time are investigated for 24 DHH people using four types of vibrations with input voltages of 1.0, 3.0, 5.0, and 7.0 V. Additionally, the locations and vibration times of oscillators that satisfy two conditions, "being able to perceive the movement as a single point" and "being able to recognize the direction quickly without getting lost," are investigated for six DHH people. The experimental results reveal parameters suitable for tactile presentation, extracted from both objective and subjective surveys.
38th Annual Assistive Technology Conference Scientific/Research Proceedings.
2023-01-01T00:00:00Z
Deaf and Hearing Small Group Inclusive Communication System
http://hdl.handle.net/10211.3/225176
Deaf and Hearing Small Group Inclusive Communication System
Elglaly, Yasmine; Miller, Christa; Miller, Chreston; Patel, Rohan; Annapareddy, Spoorthy
Small groups of Deaf or hard-of-hearing (DHH) and hearing individuals find in-person communication challenging due to differences in preferred communication modalities (L. Elliot et al.; Ntsongelwa and Rivera-Sánchez). Groups of DHH and hearing individuals usually use systems with typing and speech recognition to communicate (Butler et al.; Glasser et al.; Mallory et al.; Stinson et al.; Marchetti et al.). However, these systems were often designed for asynchronous communication. To understand how design may address this gap, we conducted two studies. In Study 1, we conducted semi-structured interviews and focus groups with 16 DHH and hearing participants. Study 1 informed the design of an inclusive group communication system, CollabAll. In Study 2, we conducted a comparative study with 16 DHH and hearing participants to evaluate the benefits of CollabAll. Our empirical findings suggest that the ability to interject, quickly voice an opinion, or challenge those holding the floor was a recurring communication need generally not met by existing group communication systems. The design of CollabAll facilitated interjecting through accessible buttons labeled with clear messages, e.g., Agree, Repeat, etc. The evaluation results indicated that group discussions are better structured and more efficient when texting is complemented with interjection support.
38th Annual Assistive Technology Conference Scientific/Research Proceedings.
2023-01-01T00:00:00Z
Evaluation of Anonymized Sign Language Videos Filtered Using MediaPipe
http://hdl.handle.net/10211.3/225175
Evaluation of Anonymized Sign Language Videos Filtered Using MediaPipe
Luna, Andrew; Waller, James; Kushalnagar, Raja; Vogler, Christian
This study investigates the feasibility of using MediaPipe to anonymize sign language videos. Recent research has developed techniques for anonymizing the identity of a signer in a video while preserving the signed message. Many of these prototypes, however, are computationally intensive and not currently usable for everyday automated real-time use. MediaPipe, a tool developed by Google for tracking body movement in video, could fill this gap and enable real-time anonymization, but it has not yet been evaluated for sign language anonymization. We address this with a study in which deaf signers (n=10) viewed two filters developed using MediaPipe: a face mesh filter that covers only the face with an avatar-like face mask, and a silhouette filter that covers the whole body in a solid monochrome with interconnected dots showing the skeleton of the signer. Results show that signers are adept at understanding and reproducing short sentences covered by either filter. However, the filters are described as unnatural, and signers note that facial movements are limited. We conclude that MediaPipe is likely robust enough for conveying manual information in signs but not necessarily for capturing facial information, and we suggest further improvements to the two filters.
38th Annual Assistive Technology Conference Scientific/Research Proceedings.
2023-01-01T00:00:00Z