From: Tamsin Ewing

This document is part of a set of modules or papers that describe accessibility issues for users with various disabilities that impact cognitive accessibility. For other modules, see Cognitive Accessibility Issue Papers. This document is part of a set of related informative publications from the Cognitive and Learning Disabilities Accessibility (COGA) Task Force, a joint task force of the Accessible Platform Architectures (APA) Working Group and the Accessibility Guidelines (AG) Working Group of the Web Accessibility Initiative.

This is an early draft. The task force intends to add more research and discussion and make editorial changes to comply with our style guide, including for citations. Please share your feedback, including any research we should consider adding to this document. Feedback on any aspect of the document is welcome.
The APA and AG working groups particularly seek feedback on the following questions:

To comment, file an issue in the W3C coga GitHub repository. Create a separate GitHub issue for each topic, rather than commenting on multiple topics in a single issue. It is free to create a GitHub account to file issues. You can also send an email to public-coga-comments@w3.org (mail archive of previous comments). The deadline for comments is 16 February 2026. In-progress updates to the document may be viewed in the publicly visible Editors' Draft.

Examples of specific disabilities that may require cognitive accessibility support include attention deficit hyperactivity disorder (ADHD), autism spectrum disorder, dyslexia, dyscalculia, mild cognitive impairment (MCI), Down syndrome, aphasia, and others.
Cognitive accessibility also benefits a broad range of users, including:

It is worth noting that many crucial systems integrate voice systems and conversational interfaces, such as emergency notifications, healthcare scheduling, and prescription refilling. With this in mind, full accessibility needs to be supported.

The following is an example of a use case for a voice system used in telephone self-service:

The following is an example of a use case for a conversational interface:

Voice systems are often implemented with the W3C VoiceXML 2.0 standard [voicexml20] and supporting standards from the Voice Browser Working Group [VBWG]. VoiceXML 2.0 has an appendix regarding accessibility that briefly discusses users with hearing and speech disabilities, as well as the Web Content Accessibility Guidelines (WCAG) and Web Accessibility Initiative (WAI) specifications. Cognitive disabilities are mentioned once in a bullet point about allowing users to control the length of time before a timeout.
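That timeout bullet maps to concrete VoiceXML 2.0 properties. As a minimal sketch, a platform could lengthen the silence and pause windows like this (the values shown are assumptions for illustration, not defaults from the specification):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <!-- Wait 10 seconds of silence before throwing a noinput event,
       giving users more time to interpret the prompt and respond -->
  <property name="timeout" value="10s"/>
  <!-- Allow a 3-second mid-utterance pause before the recognizer
       treats the utterance as finished -->
  <property name="incompletetimeout" value="3s"/>
  <!-- Dialog forms elided -->
</vxml>
```

Ideally these values would be adjustable per user rather than fixed at authoring time, since users of communication aids may need far longer windows than any single default.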
See VoiceXML 2.0, Appendix H: Accessibility. We are planning to expand this paper to include more information about challenges with internationalization and localization that users may encounter with voice systems and conversational interfaces.

This document is part of a set of modules that describe accessibility issues for users with various disabilities that impact cognitive accessibility. For other modules, see Cognitive Accessibility Issue Papers. This document is part of a set of related informative publications from the Cognitive and Learning Disabilities Accessibility (COGA) Task Force, a joint task force of the Accessible Platform Architectures (APA) Working Group and the Accessibility Guidelines (AG) Working Group of the Web Accessibility Initiative.

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.

This is an early draft. The COGA Task Force intends to add more research and discussion, and make editorial changes to comply with our style guide, including for citations. Please share your feedback, including any research we should consider adding to this document. Feedback on any aspect of the document is welcome.
The AG and APA working groups particularly seek feedback on the following questions:

To comment, file an issue in the W3C coga GitHub repository. Create a separate GitHub issue for each topic, rather than commenting on multiple topics in a single issue. It is free to create a GitHub account to file issues. You can also send an email to public-coga-comments@w3.org (mail archive of previous comments). The deadline for comments is 16 February 2026. In-progress updates to the document may be viewed in the publicly visible Editors' Draft.

This document was produced by groups operating under the W3C Patent Policy. The group does not expect this document to become a W3C Recommendation. W3C maintains a public list of any patent disclosures (Cognitive and Learning Disabilities Accessibility Task Force), a public list of any patent disclosures (Accessible Platform Architectures Working Group), and a public list of any patent disclosures (Accessibility Guidelines Working Group) made in connection with the deliverables of each group; these pages also include instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy. This document is governed by the 18 August 2025 W3C Process Document.

Voice systems and conversational interfaces can create a number of cognitive accessibility barriers. These technologies can create challenges due to heavy demands on memory and on the ability to understand and produce speech in real time. In particular, voice systems and VUIs can be inaccessible to people with disabilities that affect:

Voice systems and conversational interfaces depend on the users' knowledge and abilities. Many groups fall outside the norm in these categories. In these cases, the system often fails.
Training and artificial intelligence (AI): User groups in this module's scope often have different speech patterns and vocabulary, as well as impaired memory, executive function, and cognitive function. For the system to work well, it must be trained with content in which different abilities and impairments are well represented. This includes long-term impairments such as mild cognitive impairment (MCI), learning disabilities, intellectual disabilities, and mood disorders, as well as temporary impairments such as stress.

Requirements, user needs, and functional needs: Similarly, when teams use user needs and functional needs in the product lifecycle, they often focus on peers or on groups with biases in terms of cognitive abilities. For example, university students are often used for focus groups, but they often do not represent the cognitive and speech issues associated with aging. It is essential to include user needs from a diverse perspective beyond neurotypical audiences. (See User Story and User Needs.)

Testing: It is also essential to test with a wide group of users with diverse cognitive abilities to determine whether the system actually works as intended. Watch for increased levels of frustration, errors, and worsening of the users' mood.

General: The user needs to recall information to successfully interact with the system, such as activating phrases, as well as information presented by the system during the interaction.

Voice Menu Systems: Menus that present several choices at once may pose challenges for users with disabilities related to working memory. Such systems require users to hold multiple pieces of information, such as the number associated with an option, while processing the terms that follow. This is true of systems that require either a voice response or a key press. Many designers assume that users can remember lists of about seven items. This assumes a typical working memory.
People with impaired working memory can hold significantly fewer items simultaneously. As a result, they may not be able to use a system that requires them to compare items or remember numbers while processing words or directions. For example, the instruction “to speak to a nurse, press 2” stands by itself and does not require remembering anything before or after. Pausing between “to speak to a nurse” and “press 2” gives users time to decide if they want to speak to a nurse before they are given the rest of the instructions. The order of the instruction is important, and so is the pacing. The goal is to give users time to process the prompt and reduce the need for memory. Voice User Interfaces: For VUIs, users may be required to remember key phrases (such as activating phrases like “Hey, Google”) in order to operate successfully. Users who have difficulty recalling such phrases due to long-term memory impairments may not be able to operate the system. General: If a system response is too slow, a user with disabilities related to executive functioning may not know if their input was received and may press the key or speak again. Voice Menu Systems: The user needs to be able to decide when to act on a menu choice. If the user does not know how many options will be presented or if the system presents them too slowly, the user may make an incorrect choice based on partial information. Voice Menu Systems: The user may need to compare similar options such as "billing," "accounts," and "sales," and decide which one is best suited to accomplish their goal. Without additional context or prior knowledge, the user is likely to select the wrong menu option. General: If responses produced by a system are not provided in clear and accessible language, the user may have difficulty interpreting them. The user may not understand the response or know if they are using the system correctly. 
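In VoiceXML 2.0 terms, this kind of goal-first, paced prompting can be sketched directly. The fragment below is illustrative only: the menu wording, digits, form names, and transfer number are invented for the example, and the `#nurse` and `#refill` dialogs are elided.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <!-- Reserved digit: "0" reaches a human agent from anywhere in the dialog -->
  <link dtmf="0" next="#operator"/>

  <menu id="main">
    <!-- Goal first, then a pause, then the action. Each instruction is
         self-contained, so nothing must be held in working memory. -->
    <prompt>
      To speak to a nurse <break time="1500ms"/> press 2.
      <break time="2s"/>
      To refill a prescription <break time="1500ms"/> press 3.
    </prompt>
    <choice dtmf="2" next="#nurse">nurse</choice>
    <choice dtmf="3" next="#refill">prescription</choice>

    <!-- Silence is not treated as an error: repeat the prompt -->
    <noinput><reprompt/></noinput>
    <!-- Unrecognized input routes to a person, not a deeper menu -->
    <nomatch><goto next="#operator"/></nomatch>
  </menu>

  <form id="operator">
    <!-- Illustrative destination; a bridge transfer keeps the caller connected -->
    <transfer name="agent" dest="tel:+15555550100" bridge="true"/>
  </form>
</vxml>
```

The design choices here mirror the guidance above: pauses give decision time, each prompt stands alone, and both silence and misrecognition recover gracefully instead of penalizing the user.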
General: Systems that time out may not give users with disabilities that affect processing speed sufficient time to interpret information and formulate a response. Advertisements and additional, unrequested information also increase the amount of processing required.

Voice Menu Systems: The user needs to focus on the different options and select the correct one. It can be challenging to focus on long or multi-level spoken menus without written counterparts. Advertisements or other unrequested information may also make it harder for users with attention-related disabilities to stay focused on the task they are trying to complete.

General: The user needs to interpret the terms and choose the one that most closely matches the user's goal. This involves speech perception, language understanding, and time limits. The sounds of language need to be heard, interpreted, and understood within a given time. Users with disabilities related to language and auditory perception may make mistakes in interpretation due to auditory-only input.

We identified three issues to consider that apply to all kinds of voice systems or conversational interfaces:

Timeouts: The user needs to formulate a spoken response to the prompt before the system "times out" and generates another prompt. For example, users of augmentative and alternative communication (AAC) devices or speech-to-speech technologies may require significant additional time to respond before the system times out.

Spontaneous speech: In directed dialog systems that guide the user through a series of predefined questions and responses, the user only needs to be able to speak a word or short phrase. However, the increasing use of natural language systems means the user may need to describe their issue in their own words. This feature is an advantage for some users because it does not require them to remember menu options.
But it can be problematic for users with disabilities that impact their ability to produce spontaneous speech, such as people with aphasia or autistic people for whom stress may impact spoken communication.

Speech recognition: Speech recognition systems might not work well for people whose speech sounds different. Users may not be able to interact with a system that requires verbal input but does not recognize their speech. This affects many groups, such as users with MCI or Down syndrome.

General: Mental health conditions, such as anxiety, may also impact a user's ability to interact with voice systems or conversational interfaces. High demands on cognitive load, negative experiences with technology, and interruptions can exacerbate anxiety or frustration, and decrease a user's ability to interact with a system. Note that many people requiring cognitive accessibility supports find their skills are reduced as anxiety and cognitive load increase.

This list of user needs is not complete. We are also seeking feedback on the format for presenting user needs.

As a user who has cognitive accessibility needs, I need to get human help without going through a complex menu system (VoiceXML [voicexml30]) or a complex voice recognition menu system that relies on memory and executive function, so that I can set an appointment or find out some information.

As a user who has cognitive accessibility needs, I need to use the system without relying on a good memory, learning new information, or dealing with distraction or cognitive overload.

For users who are unable to use the automated voice system, it must be possible to reach a human through an easy transfer process rather than being directed to call another phone number. For telephone self-service systems, there should be a reserved digit for requesting a human operator. The most common digit used for this purpose is "0."
However, if another digit is already in widespread use in a particular country, then that digit should always be available to reach a human agent. Systems should avoid making users reach an agent through complex digit combinations. Other digits can be used for special actions too, but there should not be too many: too many rules can be confusing and hard to remember. Also, repeating these options too often can be distracting. Human agents should be trained to support users with disabilities. Too often, human help places users back into the automated system they could not manage.

Make it easy to set user preferences when available, such as adjusting the settings by using natural language ("Slow down!"). Examples of customization include:

Error recovery should be simple and take the user to a human operator. An error response should not end the interaction or lead to a more complex menu. Keypad or telephone-based systems should use a reserved digit to help with error recovery. Example: setting the number 0 as the default for reaching a human operator.

Give prompts in ways that reduce cognitive load. For example, the prompt "press 1 for the help desk" requires the user to remember the digit 1 while interpreting the term "help desk." This wording is harder to process than the prompt "for the help desk (pause): press 1" or "for the help desk (pause) or for more help (pause): press 1."

Avoid additional information such as special offers or extra text that does not support the task directly. For example, the following sentences can cause anxiety or add to the users' cognitive load:

Natural language understanding (NLU) or natural language interpretation (NLI) and AI systems allow users to state their requests in their own words, which can help users who have difficulty remembering menu options or mapping the menu options to their goals.
However, the following should be taken into account:

Standard best practices in VUI design apply to users with disabilities that impact cognitive accessibility, and should be followed, such as those provided by the Association for Conversational Interaction Design wiki [ACIxD] or ETSI ETR 096 [ETSI-ETR-096]. Some examples of generally accepted best practices in VUI design include:

See the ACIxD wiki [ACIxD] for additional recommendations and details. Some specific best practices for users with disabilities that impact cognitive accessibility include:

For example, in Section 255 [Title42-section255] of the U.S. Telecommunications Act, paragraph 1193.41, Input, Controls, and Mechanical Functions, clauses (g), (h), and (i) apply to cognitive disabilities and require that equipment be operable without time-dependent controls, operable without the ability to speak, and operable by persons with limited cognitive skills.

Recent technological developments may be helpful for users with disabilities that require cognitive accessibility supports.

Internet use and apps can create a number of risks for people requiring cognitive accessibility. This module covers safety issues for these users, including cybercrime, mental health, wellbeing, and privacy. This module:

This document is part of a set of modules that describe accessibility issues for users with various disabilities that impact cognitive accessibility.
For other modules, see Cognitive Accessibility Issue Papers. This document is part of a set of related informative publications from the Cognitive and Learning Disabilities Accessibility (COGA) Task Force, a joint task force of the Accessible Platform Architectures (APA) Working Group and the Accessibility Guidelines (AG) Working Group of the Web Accessibility Initiative.

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.

This is an early draft. The COGA Task Force intends to add more research and discussion, and make editorial changes to comply with our style guide, including for citations. Please share your feedback, including any research we should consider adding to this document. Feedback on any aspect of the document is welcome. The AG and APA working groups particularly seek feedback on the following questions:

To comment, file an issue in the W3C coga GitHub repository. Create a separate GitHub issue for each topic, rather than commenting on multiple topics in a single issue. It is free to create a GitHub account to file issues. You can also send an email to public-coga-comments@w3.org (mail archive of previous comments). The deadline for comments is 16 February 2026. In-progress updates to the document may be viewed in the publicly visible Editor's Draft.

Publication as an Editor's Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by groups operating under the W3C Patent Policy. The group does not expect this document to become a W3C Recommendation. W3C maintains a public list of any patent disclosures (Cognitive and Learning Disabilities Accessibility Task Force), a public list of any patent disclosures (Accessible Platform Architectures Working Group), and a public list of any patent disclosures (Accessibility Guidelines Working Group) made in connection with the deliverables of each group; these pages also include instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy. This document is governed by the 18 August 2025 W3C Process Document.

Examples of specific disabilities include attention deficit hyperactivity disorder (ADHD), autism spectrum disorder, dyslexia and dyscalculia, mild cognitive impairment (MCI), Down syndrome, and aphasia.
Examples of cybercrime include:

People with cognitive and learning disabilities are at higher risk and may be unable to take the recommended safety precautions. Not all activities that affect people's safety are illegal. Personal information can be stored and sold to third parties or stolen. This information can be used to hurt the user, including through financial or emotional exploitation. This can include exploiting the user for marketing or to increase click-throughs. Fear of these dangers can prevent people from using apps and websites that may be valuable to them. For example, mental health apps are a potentially important part of health care. However, research shows that many people do not use mental health services via the web or apps due to concerns about whether mental health information is kept private and how their data is used [RM-Lipschitz1]. Other research shows people may avoid apps that frustrate them with their complexity and cognitive load. This deprives users of support and services. It also means fewer people with cognitive and learning disabilities and mental health conditions are reflected in the data surrounding these services. This makes many groups invisible or underrepresented in data-driven decisions.

In this module, we lay out potential safety issues for people with cognitive and learning disabilities. Proposed solutions include integrating processes that support safety and user control to ensure mental health, wellbeing, supported consent and decision making, privacy, and more. Such a solution could be part of its own certification process.

We are considering rewriting this section to focus on the issues on the development end that lead to risks and vulnerabilities.
People with cognitive and learning disabilities may not be able to easily use some of the common security measures used on the Web, such as two-factor authentication and complex, unique passwords for each login [RM-Ophoff1]. Extra security precautions to increase password strength often make this group more vulnerable to "human error". This can encourage risky behavior, such as keeping a list of passwords on a desk where it can be viewed by anyone with physical access to the room. Also, when users ask for assistance to complete these security procedures, they can put themselves at high risk of being abused by those they trust to help.

Cybercriminals use deception to gain trust. This enables them to negatively influence the behavior of vulnerable individuals. People with cognitive and learning disabilities who experience difficulty understanding social cues may fail to accurately identify a potentially harmful situation. Those who have difficulty understanding others may be more trusting and more easily believe false information. Also, people with disabilities affecting reasoning, attention, or memory may be similarly vulnerable to these situations, as they may struggle to validate presented information.

People with cognitive and learning disabilities may be more at risk of being a victim of a sexual crime. Some potential reasons for this include:

Personalization is important, especially as a way to avoid conflict when meeting varying user needs. However, there is a significant risk that, if poorly implemented, user information and vulnerabilities can be exposed. This puts the most vulnerable users of this population at the greatest risk.

Algorithms are computer programs that try to solve a specific problem or achieve a goal. (Goals are set by the programmers and the company they work for.) Currently, computer algorithms are increasingly responsible for automated decision-making. These decisions use information from tracking our everyday behaviors online and via apps.
This data is often sold to or used by companies, including organizations providing critical services such as employment, health care, and credit. This creates multiple problems. Many users know that they are at risk of scams and data misuse, and are therefore afraid of using digital services that they may need. All users in a recent study had concerns about privacy and their data being inappropriately shared [RM-Rotondi1].

The misuse of algorithms has the potential to cause financial harm to people with cognitive and learning disabilities and mental health conditions. For example, a patent could be filed for an algorithm that helps financial institutions analyze a person's social network and use that data when deciding whether to grant the person's loan application [RM-O'Neil1]. People with cognitive and learning disabilities may be vulnerable to risks related to algorithms and big data. This includes errors and biases in the algorithm and problems with privacy [RM-Venkit1] [RM-Moura1] [RM-Monteith1].

In another use case, Internet of Things (IoT) devices are more and more involved in providing medical care. This can include support for mental illness and cognitive support. Massive amounts of data are collected from as many digital activities and devices as possible. This erodes people's knowledge of what is public, as much of this data is collected in homes and other private settings. This data is then used to drive decisions such as loans, insurance coverage, and more. Privacy becomes more important for individuals with mental health diagnoses and cognitive disabilities, especially where there is prejudice and stigma [RM-Monteith1]. For example, people who are victims of domestic violence may avoid putting their picture online because they do not want to be easily found, but automated programs for job applications may be biased against profiles without a picture.

This section focuses on research from the Mental Health subgroup.
More research is needed on social media and cognitive and learning disabilities. Prolonged time spent on social media platforms appears to contribute to increased risk for a variety of mental health symptoms and poor wellbeing. This may partly be driven by the detrimental effects of screen time generally on mental health. Research suggests social media usage can cause increased severity of anxiety and depressive symptoms [RM-Hardy1]. Recent studies have reported negative effects of social media use, particularly on the mental health of young people. Negative outcomes include social comparison pressure with others (sometimes called social media envy), rumination, depressive symptoms, anxiety, greater feelings of social isolation after being rejected by others on social media, and body image issues (especially in teens) [RM-Ang1][RM-Karim1].

For preteen girls, time spent on social networking sites produced stronger correlations with body image concern than did overall internet exposure. This may lead to reduced self esteem, dieting, and eating disorders. With overexposure to social media, preteens are highly likely to be exposed to material that they neither fully understand nor are equipped to evaluate sufficiently critically. Similarly, in another study with adults over 30, social media use seemed to correlate with increased anxiety [RM-Tiggemann1] [RM-Hardy1].
Social media does not seem to have a consistent effect on mental health, and many factors can come into play. The relationship between social media and mental well-being may be affected by age, and some studies have shown the opposite effect for other groups. For adults aged 18–29, social media use correlates with decreased anxiety. This is a counterpoint, where social media sometimes reduces the incidence of anxiety, especially for younger people [RM-Hardy1]. One study conducted with nurses argues that social networking site addiction stimulates various stressors such as envy, social anxiety, and rumination. These stressors can lead to task distraction. A preference for online social interaction was also found to be linked to social anxiety and loneliness [RM-Majid1] [RM-Caplan1]. Social networking sites have also been associated with a likelihood of diminished future well-being and reduced self-esteem. Also, people with signs of depression linked to social networking sites spend even more time on social networking sites, leading to a vicious circle of depression [RM-Shakya1].
Further, social media has become an important part of the lives of many people with mental health disabilities. Many of these individuals use social media to share their lived experiences with mental illness, to seek support from others, and to search for information about treatment recommendations, accessing mental health services, and coping with symptoms. Overall, although time spent on the internet was found to be negatively associated with mental health, some activities, such as school work, were positively associated. Additionally, studies have shown that social media helped teens cope with isolation during COVID-19 lockdowns [RM-Naslund1] [RM-Hökby1] [RM-Cauberghe1]. Online social participation can alleviate the negative effects of pain on mental well-being. Social media may serve as a source of connection for people who are not able to participate in face-to-face interactions as much as they would like due to pain, disability, or age. Social media can also support people with social anxiety and act as a safe way to increase human interaction and make friendships (assuming websites support cognitive accessibility and social anxiety) [RM-Rotondi2]. It is also worth noting that many of the studies reviewed were scientifically weak. For example, studies often have small samples, effects that are not statistically significant, or show correlation rather than causality. For example, the correlation between social media use and mental health issues may arise because users turn to social media to help with mental health issues. More robust research is needed in this area in order to develop solutions that enable use with less risk [RM-Odgers1].
Further, people have reported (in response to this paper) that they are afraid of other long-term consequences of posting online. For example, if they post about their disability (mental health issues), or if their posts are considered inappropriate or imply health issues, they worry that the posts can be used against them or to harm them after recovery has started.

Editor's Note: This section focuses on research from the Mental Health subgroup. More research is needed on AI, curated content, and cognitive and learning disabilities.

Algorithms and artificial intelligence can make all the above issues worse. Artificial intelligence (AI) and cognitive computing can empower algorithms by learning, including learning individual users' behaviors. For example, an algorithm may have a goal to increase the time the user spends on a web site or application. AI could then continuously adapt the algorithm to be more effective for each individual user. In this example, the algorithm learns what makes each user more likely to click on a link. Unfortunately, this is often anger, fear, and other negative emotions. This may have two main effects on the mental health of the user. Algorithms often adapt and tailor content to the individual. Often the algorithm changes the content so that the user stays on the site for longer and is more likely to click on links and advertisements. The effect is curated content. Curated content also creates "echo chambers," or environments where users disproportionately see content they agree with. The user is unlikely to be exposed to different points of view. Users may not be aware of the goals, settings, and biases of the algorithms, making it harder to realize their isolation. Issues that can be reinforced include unhealthy eating practices, eating disorders, and antisocial behavior. Anxieties are also often reinforced by exposure to many voices expressing similar views, such as conspiracy theorists.
While this also occurs offline, on the web it happens faster and to a more extreme extent [RM-Tiggemann1]. When groups are exposed to large amounts of content that disproportionately represents one point of view and, for the sake of clicks, encourages negative reactions to people outside the group, it is more difficult for people to live together and respect differences. Conditions that impact emotional regulation may also affect how people act online, and make their reactions more extreme [RM-Yu1].

For example, mental health conditions may impact spending. Further, frustration from difficult interfaces can cause stress and impair decision-making. When under stress, people can become more impulsive. Research on decision-making shows that stress favors fast, intuitive decision-making over slower, more logical or analytical processes [RC-M1]. For example, the internet may enable a person with bipolar disorder to make substantial financial commitments while in a manic phase, often with much greater ease than in the physical world. Further, some learning and cognitive disabilities impact spending habits and money management. For example, a person with dyscalculia may be unable to recognize a steep price difference between similar options. In the physical world, they may use cash when shopping to limit the effect.

Risks to users seem to increase with factors such as lower levels of sociability offline; loneliness, anxiety, and depression; poorer insight, judgment, discrimination, and ability to detect deception online; and reduced experience and life opportunities. Perceived high online risk may lead to gatekeeping restrictions and controlling digital access. Solutions include supportive communities and supported decision-making (see the issue paper on supported decision-making). However, solution providers must be careful to reinforce the user's rights.
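As one illustrative possibility (not a design from this paper), a supported-decision guard for spending could hold large financial commitments for review by a user-chosen helper, with smaller ones simply delayed to counter stress-driven impulsivity. The function name, thresholds, and statuses below are hypothetical; any real system would need the user's true consent and control, per the cautions above.

```javascript
// Hypothetical sketch of a user-configured supported-decision guard.
// Thresholds and status names are illustrative assumptions.
function reviewPurchase(amount, thresholds) {
  if (amount >= thresholds.requireHelper) {
    // Very large commitments wait for a helper the user chose in advance.
    return { status: "pending-helper-approval" };
  }
  if (amount >= thresholds.coolingOff) {
    // A cooling-off delay gives the user time to reconsider an impulsive decision.
    return { status: "delayed", hours: 24 };
  }
  return { status: "approved" };
}
```

Because the user sets the thresholds and picks the helper, the guard acts as a tool of autonomy rather than gatekeeping imposed from outside.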
Restriction may affect online self-determination, participation, and development by people with intellectual disabilities and others. There is also a significant risk of solutions being misused to control and oppress marginalized or vulnerable people. Providers should be very cautious about gatekeeping in locations where human rights for everyone are not well established and enforced. This paper only advocates providing tools of autonomy that cannot easily be used as tools of repression, marginalization, or isolation. Solutions must be careful not to unnecessarily restrict access and autonomy for vulnerable groups without true consent [RM-Chadwick1].

Overuse of some internet features, such as gaming, can sometimes reach the level of addiction. Internet addiction disorder (IAD) and internet gaming disorder (IGD) describe conditions in which people develop a problematic, compulsive use of the internet (mainly gaming) that causes significant impairment in different aspects of their life over a prolonged period of time. This includes neglecting other important areas of life (work, school, family) or lying to hide the extent of internet use. Note that while not everyone agrees with this as a clinical diagnosis, the effect is well established [RC-IAD1] [RM-Kim1]. There is an association between internet or gaming addiction and serious psychiatric symptoms such as somatization, sensitivity, depression, anxiety, aggression, phobias, and psychosis. This association held after controlling for age, sex, education level, marital status, and type of university [RM-Hökby1].
In other research, 56% (255/457) of people with schizophrenia or schizoaffective disorder using digital technology reported negative feelings "often" or "very often". This included feelings of being unable to stop (27%, 123/457), frustration (25%, 114/457), paranoia (24%, 110/457), worry (20%, 91/457), sadness (20%, 91/457), anger (19%, 87/457), mania (16%, 73/457), or envy (16%, 73/457) [RM-Gay1].

More and more decision-making processes are based on data gathered from phone use and apps, such as smart city infrastructure and traffic allocations. This makes many groups under-represented, or even entirely excluded, in data-driven decisions. For example, decisions on parking spaces may be based on data from a parking app. People with disabilities may find the app confusing or unusable. They may pay with cash. The data collected by the parking app does not include their patterns or needs, and they are left out of the data. When they go to events, their parking needs are not monitored. This can also occur for other public facilities as decision making becomes more data driven. Vertical and horizontal analysis of data needs to be performed to see which groups are missing from the data set, including the different sub-groups of learning, cognitive, physical, and mental health related disabilities.

Additionally, as discussed above, many people do not use apps for mental health services and cognitive support due to concerns about how their data is used and whether mental health information will be kept private. Other reasons include the complexity and frustration experienced when these apps are not designed with cognitive, learning, and emotional disabilities in mind [RM-Lipschitz1].

This list of user needs is not complete. We are also seeking feedback on the format for presenting user needs.
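The analysis of missing groups described above could, as a minimal sketch, compare each group's share of a data set against its expected share of the population and flag groups that fall far short. The function name, group labels, and the tolerance factor are illustrative assumptions, not from the source.

```javascript
// Hypothetical sketch: flag groups that are under-represented in a data set
// relative to their expected population share, before the data drives decisions.
function underrepresentedGroups(records, groupOf, expectedShares, tolerance = 0.5) {
  const counts = {};
  for (const r of records) {
    const g = groupOf(r);
    counts[g] = (counts[g] || 0) + 1;
  }
  const total = records.length;
  const flagged = [];
  for (const [group, expected] of Object.entries(expectedShares)) {
    const actual = (counts[group] || 0) / total;
    // Flag any group whose actual share is well below its expected share,
    // including groups entirely absent from the data.
    if (actual < expected * tolerance) flagged.push(group);
  }
  return flagged;
}
```

A real audit would repeat this vertically (per facility, per event) and horizontally (per disability sub-group), as the text suggests.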
-
-
-Introduction
@@ -132,7 +123,7 @@ Disabilities that may require cognitive accessibility support
- Voice systems and conversational interfaces
Voice User Interfaces (VUIs), or conversational interfaces, include systems such as Amazon Alexa, Cortana, Google Assistant, and Siri. These interfaces tend to use voice recognition, natural language processing (NLP), and artificial intelligence (AI) to understand and respond to users.
-
-
-
Introduction
@@ -162,7 +164,7 @@ Voice systems and conversational interfaces
-Challenges for People with Disabilities that Impact Cognitive Accessibility
+Challenges for people with disabilities that impact cognitive accessibility
mental health.
- General Concerns
+ General concerns
- Memory: Recalling and Responding to Prompts
+ Memory: Recalling and responding to prompts
- Executive Function: Deciding When to Respond
+ Executive function: Deciding when to respond
- Reasoning: Making the Correct Selection
+ Reasoning: Making the correct selection
- Knowledge: Interpreting Responses
+ Knowledge: Interpreting responses
- Processing Speed
+ Processing speed
- Attention: Making Correct Selections
+ Attention: Making correct selections
- Language and Auditory Perception: Correctly Interpreting Spoken Information
+ Language and auditory perception: Correctly interpreting spoken information
- Speech and Language Production: Responding to Voice and Speech Recognition Systems
+ Speech and language production: Responding to voice and speech recognition systems
- Mental Health: Interacting with Voice Systems and Conversational Interfaces
+ Mental health: Interacting with voice systems and conversational interfaces
- User Story and User Needs
+ User story and user needs
Related user needs
I need to know what the next steps are and when I am finished.
- Related Personas
+ Related personas
-Proposed Solutions Based on Current Research
+Proposed solutions based on current research
- Ensure Human Backup is Available and Reachable
+ Ensure human backup is available and reachable
- Allow Users To Change Settings
+ Allow users to change settings
Consider customizable voices, including regional and standard accents, to create familiarity for the user.
- Simplify Error Recovery
+ Simplify error recovery
- Design Prompts to Reduce Cognitive Load
+ Design prompts to reduce cognitive load
- Remove Unnecessary Words
+ Remove unnecessary words
"Avoid unnecessary delays by answering the following questions quickly and correctly."
- Ensure Voice and Speech Recognition for All Users
+ Ensure voice and speech recognition for all users
Use existing standards for speech recognition if they’re available for the languages your system supports. For example, the European Telecommunications Standards Institute (ETSI) [ETSI-ES202-076] has a standard for voice commands in many European languages. When using existing standards, keep in mind that asking users to remember too many commands can be overwhelming.
- Follow Best Practices in General Voice Interaction Design
+ Follow best practices in general voice interaction design
@@ -418,7 +420,7 @@ Follow Best Practices in General Voice Interaction Design
- Follow Best Practices in Cognitively Accessible VUI Design
+ Follow best practices in cognitively accessible VUI design
Task overview: Provide a way to attain a list of all the steps required and information needed to complete the task or process.
- Follow Appropriate Legislative Requirements
+ Follow appropriate legislative requirements
- Integrate New Technology-Based Solutions
+ Integrate new technology-based solutions
Abstract
-
-Abstract
designers, content creators, and developers who wish to have a more in-depth understanding of inclusion issues.
Status of This Document
-
-Status of This Document
Should additional user needs be addressed?
Introduction
@@ -105,41 +112,41 @@ Status of This Document
mental health disabilities, and
People with disabilities that require cognitive accessibility support can benefit from using the internet. However, there are risks: all users may experience fraud, terrorism, extortion, harassment, and hacking. These activities are collectively referred to as cybercrime [RM-Hökby1].
- Safety Issues for People with Cognitive disabilities
+ Safety issues for people with cognitive disabilities
Cybercrime
-Hackers and Identity Theft
+Hackers and identity theft
-Con Artists and Deception
+Con artists and deception
-Sexual predators and Physical Safety
+Sexual predators and physical safety
they are less likely to identify unreasonable requests.
-Data, Algorithms and privacy
+Data, algorithms and privacy
Personalization
-Algorithms and Privacy
+Algorithms and privacy
-Social media and Mental Health
+Social media and mental health
Longevity of posts
-Algorithms, AI (Artificial Intelligence) and Curated Content
+Algorithms, AI (artificial intelligence) and curated content
Curated content
-Spending and Financial Commitments
+Spending and financial commitments
-Gatekeeping and Restricting Digital Access
+Gatekeeping and restricting digital access
-Internet or Gaming Addiction and Overuse
+Internet or gaming addiction and overuse
-Poor Representation in Data Sets
+Poor representation in data sets
- User Story and User Needs
+ User story and user needs
Safety should be a priority when making content accessible for people with cognitive and learning disabilities. All user information must be kept safe, to the fullest extent possible. Any clues that the user has cognitive disabilities, such as a request for a simplified version, should be protected information.
Personalization systems should be designed so that any information implying vulnerabilities is stored on the user's device and kept secure.
Technical designs that may support privacy include
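One minimal sketch of such an on-device design (an illustration, not a design from this paper) keeps personalization preferences that could imply a disability in the browser's own storage, so no server ever learns that the user requested adaptations. The storage key and preference names are hypothetical; `storage` stands in for `window.localStorage` in a browser.

```javascript
// Hypothetical sketch: preferences implying vulnerabilities stay on-device.
const PREFS_KEY = "a11y-prefs"; // assumed storage key, not from the source

// `storage` is any Web Storage–like object (window.localStorage in a browser).
function savePrefs(storage, prefs) {
  // The preferences never leave the device: no network request is made.
  storage.setItem(PREFS_KEY, JSON.stringify(prefs));
}

function loadPrefs(storage) {
  const raw = storage.getItem(PREFS_KEY);
  // Default: no adaptations requested.
  return raw ? JSON.parse(raw) : { simplifiedText: false, reduceDistractions: false };
}
```

Because the adaptation is applied entirely client-side, a request for a simplified version never becomes protected information held by a third party.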
@@ -296,8 +303,8 @@
-Supported decision making (SDM) are functions that help people with disabilities make decisions as independently as possible whilst staying safe. This includes choosing helpers to help them make choices. See links including;
+Supported decision-making (SDM) refers to functions that help people with disabilities make decisions as independently as possible while staying safe. This includes choosing helpers to help them make choices. See links including:
The helpers can be friends, family members, or professionals. Supported decision making helps people avoid a formal guardian. SDM needs to adapt to the changing needs of the individual. Please see the solutions suggested in the supported decision making issue paper (See supported decision making issue paper - temporary link)
-For social media, provide a graded range of solutions including:
Security should be strong and easily used by those with cognitive disabilities, such as a biometrics option. For a full discussion see the issue paper on security.
-Social media refers broadly to web and mobile platforms that allow individuals to connect with others within a virtual network (such as Facebook, Twitter, Instagram, Snapchat, or LinkedIn), where they can share, co-create, or exchange various forms of digital content, including information, messages, photos, or videos (Ahmed et al. 2019)
Algorithms refer broadly to the logic often used to run computer programs.
Thanks to the Crimes against Children Investigations, Israel National Cyber Unit, for the review. Interviewed by Lisa Seeman-Horvitz.