Plain language (Minimum) #30
Comments
Assigned to Jim Smith (@jim-work) https://www.w3.org/WAI/GL/wiki/SC_Managers_Phase1 |
@jim-work Is there a PR ready to go for this? |
Pull request #106 |
This is difficult to measure and to implement. I recommend looking at using reading level. It isn't perfect, but it addresses most of the user needs identified, especially when paired with existing Technique G153. Reading level has international support, it has automated tests, and it has a variety of formulas (Flesch-Kincaid is the oldest and best known; there are many others, like the Dale-Chall list, which provides 3000 common simple words). Proposed revision: |
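For readers unfamiliar with the formula Jean mentions, here is a minimal sketch of the Flesch-Kincaid grade-level calculation. This is an illustration only, not part of any proposal in this thread; the syllable counter is a rough vowel-run heuristic, which real readability tools replace with dictionary lookups.

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable count: runs of vowels, minus a silent trailing 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1 and not word.endswith("le"):
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Grade = 0.39*(words/sentences) + 11.8*(syllables/word) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

On short instructional strings the absolute grade number is noisy, but the relative ordering (simpler text scores lower) is what an automated check would rely on.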
Short Text
I’m curious about this qualification. It suggests that an error message that doesn’t require a response is an exception. What is the rationale for this?
I don’t think it is necessary to have this ‘see exceptions’ text here. Perhaps it's an editorial comment for the review?
Is there a reason to describe what you don’t want instead of what you do want? How about “Words are used according to their proper meanings or definitions. Metaphors and figurative language are not used unless they can be automatically replaced with literal text based on user settings.”
G131 already covers this to meet both 2.4.6 Headings and Labels and 3.3.2 Labels or Instructions. I do not see the point of regurgitating this.
As with the control material, I question whether this isn’t already covered elsewhere – or if it could be incorporated into current SC without significant impact. As well, I would argue that regardless of the potential to work any new requirement into 2.4.6, this also seems to be fully covered by the criteria you are proposing in Task Completion. There should be some level of normalization between techniques – ideally only one technique should capture an issue, not many.
Not sure why this SC is defining "easily available" since it is not used anywhere in the text except listed as a possible technique.
Testability
I don’t think an automated test would be accurate enough to indicate a Violation outright. I suspect such triggered failures would need to be Potential Violations, still subject to human review. I also question many web-content creators’ abilities to fully understand tense and voice.
There are many word lists. Which one is the one someone needs to test against? Which one is the one that will fail? We know that any measure that can be disputed will be, and that it can lead to risk. No solution offered, but this will be a hard sell. Also, are “frequently used” words 100% correlated with “clear” or “simple” words?
Techniques
Troubled that there are techniques listed for pronouns and symbols, when neither are mentioned in this SC. |
Agree with @mbgower on the wording changes. Small change: Words are used according to their common meanings or definitions. Metaphors and figurative language are not used unless they can be automatically replaced with literal text based on user settings. @jspellman this has been discussed. The reading-age approach does not work in this context; it makes text easier to read but not to understand. It will not solve the cases and examples brought up in the description. This may require accessibility experts acquiring new skills and buying new tools. I feel that is OK |
full new proposal Plain language: Provide clear and simple language in instructions, labels, navigational elements, and error messages, which require a response to continue, so that all of the following are true. For instructions, labels, and navigational elements:
Also on controls:
Also on instructions:
Exceptions:
|
How can you close this? You've barely responded to any of the points raised. |
You now have "common" used in two separate bullets. I think I get why you would not want to use "proper" but it creates overlap to use "common" twice. As well, many metaphorical uses of words are common uses. I suggest using "literal" instead, or finding a better alternative. |
I believe this is your response to Jean's "This is difficult to measure and to implement" comment? Without going into a discourse, I'll just say that many of the COGA candidates are not resolvable at the stage where most accessibility scrutiny currently takes place. Trying to improve and broaden the accessibility of content is our key goal, but let's not downplay the challenges posed by some of the candidates in their present form. A whole lot more than accessibility experts will need to acquire new skills to achieve and verify ones such as this. |
I think there is a fundamental issue with all the plain language success criteria that have been proposed which has nothing to do with the availability (or lack of) of automated tools.
The good thing about using passive voice here is that the main person (you, the one receiving the advice) is clearly in focus grammatically as being the subject of the sentence. Take another example that could appear in an instructional text:
Turning this example into active voice "Scientists do not know much about this disease" arguably makes the sentence more complex because it forces the appearance of a subject that is not helpful - the same might be true not just for scientists but also for doctors, policy makers etc. So what is the problem? Testers identifying passive voice may feel encouraged to fail content if they are not sure that one of the exceptions applies. Also:
Controls:
I think we agreed in yesterday's telco that controls might therefore be taken out of the scope of this SC because of that, but that is up to the COGA TF to decide. Detlev |
@mbgower The pull request was made before most of the comments were made. We were just late closing the thread, and then the discussion started again. The discussion is meant to move to the pull request - at least I think so. To be honest I find it all quite confusing. |
@joshueoconnor @marcjohlic @detlevhfischer I would like to continue discussing this on the list and on the COGA call tomorrow. You are invited to join us, so we can work out the right wording.
Looking over the thread, it looks like the pull request was made before there was any discussion. I don't see the value of trying to move something forward without any vetting. There seems to be an element of panic going on in trying to push the COGA candidates through, sort of an 'Anything is better than nothing' attitude. That's a false dichotomy. Scrutiny and conversation will advance and refine the issues towards incorporation. Trying to move draft proposals in bulk to the next stage without addressing contentious and unresolved issues ("kicking the can down the road") is not a good strategy.
It looks like the pull request has also been closed. Where are we supposed to post questions and issues for this topic now?
I share some of that confusion. But I don't think either of us give up easily, so let's adapt the system for ourselves to make it work. |
I feel I have said all I had to say.
For me, this SC really seems to be at the level of 'best practice' and not at the level where some tester will fail content because he or she thinks that a word is uncommon or there is a passive clause which might better be turned into an active clause. I find this too intrusive and too rigid.
Having said that, I appreciate the minimum variant of this SC is constrained to instructions, and I agree these should be as plain as possible. I think I imagine being an author and feeling upset at being shackled by prescriptions that may not do justice to my particular task at hand - so I admit this is partly a gut reaction and as such, something to be put in perspective by others and by different requirements.
Detlev
|
Context-specific constraint to 1500 most common words:
I was just evaluating a site which has a question and answer section on a life-threatening disease. As evaluator of 'Plain Language (minimum)', I would have to decide whether this section falls under instructions. (It probably does, but as evaluator I would be sorely tempted to consider it out of scope.)
So assuming that I treat it as 'instruction': the context of the disease means that all sorts of medical terms come into play - there is no chance of dealing adequately with the concerns of patients looking for answers within a specific set of 1,500 words, or of simplifying these terms in a non-confusing way. Would I as an evaluator be entitled to call upon the exception
Where less-common words are found to be easier to understand for the audience. Such findings are supported by user testing that includes users with cognitive disabilities
Possibly, even though these terms are far from easy to understand. They are just necessary to map onto the diagnoses people will have received. But user testing won't be available to me as evaluator (and is ruled out if going by the CfC regarding this issue).
Just one example to show what kind of issues we get into if this were to become an AA SC.
Detlev
|
@detlev This is a good example and has been addressed in a few ways:
1. In the SC itself you can use the common way to refer to a concept in this context, so the medical terms would be fine if they qualify. We are anticipating tools that will be able to generate the word list (it would take me about a week to program that one) but, just in case we do not have it by the time we get to SC, we were excluding instructions of over 300 words until there are adequate tools.
2. You can use whatever words you want and put the simple language in the title, or the coga-easylang etc. An easy-to-access glossary could also be an acceptable technique.
Between the two I think we have it more than covered.
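The word-list tool Lisa describes can be sketched in a few lines. This is only an illustration of the idea, not the tool she mentions; the corpus, tokenization, and cutoff are assumptions.

```python
import re
from collections import Counter

def build_word_list(corpus_texts, top_n=1500):
    """Build a context-specific frequency list from sample documents
    (e.g. the 1000+ same-context sources the draft glossary requires)."""
    counts = Counter()
    for text in corpus_texts:
        counts.update(re.findall(r"[a-z']+", text.lower()))
    return {word for word, _ in counts.most_common(top_n)}

def uncommon_words(text, word_list):
    """Return the words in `text` that fall outside the frequency list,
    for human review against the SC's exceptions."""
    return [w for w in re.findall(r"[a-z']+", text.lower())
            if w not in word_list]
```

With a medical corpus as input, domain terms like diagnosis names would land inside the generated list, which is how Detlev's example would "qualify" under point 1.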
All the best
Lisa Seeman
LinkedIn, Twitter
|
@lseeman, now that you've introduced the idea of a 1500 word list as a key measure, I'd like to get back to @jspellman's suggestion about using something like Dale-Chall reading level as at least a partial solution there. Here are your starting bullets:
What would happen if you retained the double-negatives and (reworded) concrete language, and swapped in reading level in place of the "simple, common" words, to be something like this:
You'll note that I've removed "clear" and information on present tense and active voice. I believe they have had enough feedback that they can be addressed in the Understanding document without forming part of the starting language. If you think clarity is a crucial measure that is testable, attention can be focused on helping solve that item specifically. You can also work your idea of "common" into exception language based on the context. I have removed the following part, as I would argue it is overly prescriptive. This can be introduced as a technique, involving personalization.
Still lots of space for wordsmithing, but I think these three items do go a ways to achieving the goal. |
In regard to your starting text
I would still like to understand the rationale for appending "which require a response to continue". What scenario are you avoiding? However, if you feel it is necessary, at the least the comma should be removed after "messages" so that the phrase is clearly qualifying error messages and not the rest of the sentence. I think there has been enough discussion about the relevance of existing SC language that you do not need the additional specific items for "also on controls" and "also on instructions". |
@mbgower it precludes logs, warnings, etc. It makes it only about stuff that stops the user from continuing. In other words, it reduces the scope toward the absolutely essential stuff that you need to use an app or website. Considering the resistance, it should be clear why we need to do that. The latest draft has got rid of the "also on controls" and has integrated "instructions are clearly identified". |
Okay, so by "response" you mean acknowledgement, like having to click "okay", etc. Removing the comma will help with that, then. Thanks. |
@mbgower we have been asked not to worry about potential redundancy at this point.
(My 2 cents - the redundancy issue is why we should be allowed to change SCs. It is not a good reason to not address accessibility as well as we can for all users)
|
Updated the issue description to reflect the FPWD text and reopening issue. |
Similar to logos being exempt from color constrast requirements, I would expect product and brand names to be exempt from plain language requirements. |
@CharlesBelov yes, that makes sense |
@cstrobbe Thank you for your well-researched comments - and for finding the word frequency scripts. I had it on my to-do list to write one, but now I do not have to. |
New proposed language that addresses most of the comments: Error messages that require a response to continue, instructions, labels, and navigational elements use language so that all of the following are true:
|
I am wary of setting a standard on "double negatives" due to the negative polarity of Latin- and German-based languages versus the negative concord of languages such as Portuguese, Russian, and Spanish. Internationalization would therefore seemingly become an issue with this standard. https://en.wikipedia.org/wiki/Double_negative Maybe this could be addressed? |
@CityMouse negative concord is allowed with the new wording. You just cannot make a positive statement using double negatives. That is the change to support internationalization. |
A new source: http://www.minspeak.com/CoreVocabulary.php#.WQ8EzuUrI2w
On today's call (in the extended time), I proposed a departure from the current approach to Plain Language, which I was asked to draft. Here it is:
Proposed SC
Clear Instructions: Instructions describe the topic or purpose.
Background
This is a direct use of existing SC language in 2.4.6 to plug a hole between 2.4.6 and 3.3.2 that results in labels needing to be present and descriptive, but instructions just needing to be present.
There can be a lot better language than the short description I've supplied (e.g., "Instructions describe the desired user action or behaviour."). I simply lifted the existing 2.4.6 language since it already passed at 2.0, and therefore should serve as a sample of a simple and relatively vague goal which has nonetheless existed in WCAG SC language for the past decade.
What it addresses
By focusing on the lack of a requirement to make instructions descriptive (or clear), this SC immediately opens up the possibility of introducing a bunch of techniques that can be used to address COGA TF objectives.
The techniques are very high level and undeveloped for 2.4.6. So, all the points that were trying to be given the weight of an SC requirement could become techniques that can be employed for both 2.4.6 and this new Clear Instructions SC:
It could also draw on neglected parts of the following proposed SCs to add additional techniques:
And the following are somewhat related, and could again offer possible techniques:
Summary
What I'm proposing is essentially a beachhead that covers the ability to consider the content of instructions as an SC. It allows many of the COGA 'wants' to be baked into 2.1 as techniques. Since techniques are not normative, they can be added to and enhanced more iteratively and help drive COGA adoption. |
Do you think it would be possible to include navigation links? I could see that helping to cover more of the use-case from COGA without needing the complexity of word lists. |
Depending on what you are talking about, I would have thought those would more likely be labels, and any techniques could fall under either 2.4.6 or one of the Link Purpose guidelines. I personally would advocate moving the Link Only SC to AA from AAA, and doing an edit on the Understanding doc content. In Context is at level A, so it seems like a simple and obvious way of eliminating the "read more" kibble that litters some sites. |
NB: I think people might be missing the actual SC text you're proposing; it's very short and in bold at the top. Maybe reformat that to:
In the Plain language SC it applies to error messages (that require a response to continue), instructions, labels and navigational elements. I was wondering if it would make sense to have:
(As labels and headings are already covered.) However, at that stage it feels like we'd want to combine them all (headings, labels, instructions, nav elements, error messages...). |
I think it could include error messages, since 3.3.3 Error Suggestion is pretty loose about making the suggestion clear -- and it is a form of instruction. I'm still waiting to figure out what exactly is covered by navigational elements. Can you provide examples? |
I'd say it's the same as WCAG 2.0's "navigation mechanisms", which is to say I'm not sure we have a definition! Maybe it would be best phrased as "terms within navigation mechanisms" to match. In terms of examples, anything you would put in
This makes a new requirement above the proposed SC, it requires authors to add instructions. What I think it is trying to say is:
|
I get it if we want to hold things up to some higher standard for 2.1 than 2.0, but it is still a wee bit frustrating using the exact wording of an existing SC and having reasons listed for why that text is unacceptable :) That said, your suggested change is fine. I'm also going to alter the text to make it more relevant to the topic and incorporate Alastair's error message (still trying to figure out how navigational elements aren't covered by labels).
|
I think Mike's is a good stab, and the idea to thereby create a hook to introduce various COGA concepts as Techniques is a smart idea. |
Current versions of SC and Definitions
Plain language (Minimum)
Provide clear and simple language in instructions, labels, navigational elements, and error messages which require a response to continue, so that all of the following are true:
What Principle and Guideline the SC falls within.
Under WCAG 3.1
Suggestion for Priority Level
A
Related Glossary additions or changes
A word frequency list has to be generated from at least 1000 sources from the same context.
Description
The intent of this success criterion is to ensure people can understand and use navigational elements, user interfaces, and instructions. Clear language for all content is an important accessibility principle. However, if the user does not understand words and terms in these critical areas, the whole application or web site often becomes unusable.
A real-life example is a person, with mild dementia, trying to use an application to turn on a heating and air conditioning unit. The menu item for selecting heat or air conditioning is labeled "mode". The user does not know that "mode" refers to heat or to air conditioning, and thus cannot use the whole unit because of this one term.
In this real-life example (reported by a task force member), a visitor turned on an air conditioner and did not turn it off when leaving the dwelling. The weather became a bit cooler. The user, who could not turn on the heat because of the language used, became hypothermic, and needed emergency treatment.
People with dementia have impaired short-term memory, and difficulty remembering new information. Therefore, learning and remembering new terms can be impossible. However, if an interface uses familiar terms and design, it is fully usable. Not being able to use these applications means that more people require live-in help, and lose their independence.
In another example, many task force members cannot use GitHub because the terms it uses are not typical for functions (such as "push" instead of "upload").
Some users, particularly those on the autism spectrum, will have difficulty with figurative language, as they will try to interpret it literally. This will frequently lead the user to fail to comprehend the intended meaning, and may instead act as a source of stress and confusion. (Taken from ETSI)
It should be noted that restrictions on scope make it practical from the content providers' perspective, and the exceptions ensure it is widely applicable. For example, error messages that require a response to continue are being included at Level A because, without understanding these messages, the user is completely unable to continue. Error messages that do not require a response may be frustrating, but do not always make the whole application unusable.
Benefits
This supports those who have reading difficulties, language disabilities, and some visual perceptual difficulties. It can include people with intellectual disabilities, Receptive Aphasia, and/or Dyslexia, as well as those with general cognitive learning disabilities. This supports those who have Dementia, and/or acquire cognitive disabilities as they age.
Related Resources
Stroke Association Accessible Information Guidelines http://www.stroke.org.uk/professionals/accessible-information-guidelines
Computers helping people with special needs, 14 international conference ICCHP 2014 Eds. Miesenberger, Fels, Archambault, et al. Springer (pages 401). Paper: Never Too Old to Use a Tablet, L. Muskens et al. pages 392 - 393.
Phiriyapkanon. Is big button interface enough for elderly users, p. 34, Mälardalen University Press, Sweden, 2011.
[i.49] Vogindroukas, I. & Zikopoulou, O. (2011). Idiom understanding in people with Asperger syndrome/high functioning autism. Rev. soc. bras. fonoaudiol. Vol.16, n.4, pp.390-395.
NOTE: Available at http://www.scielo.br/scielo.php?script=sci_arttext&pid=S1516-80342011000400005&lng=en&nrm=iso .
[i.50] Oi, M., Tanaka, S. & Ohoka, H. (2013). The Relationship between Comprehension of Figurative Language by Japanese Children with High Functioning Autism Spectrum Disorders and College Freshmen's Assessment of Its Conventionality of Usage, Autism Research and Treatment, vol. 2013, Article ID 480635, 7 pages, 2013. doi:10.1155/2013/480635.
NOTE: Available at http://www.hindawi.com/journals/aurt/2013/480635 /.
[i.51] de Villiers, P. A. et al. (2011). Non-Literal Language and Theory of Mind in Autism Spectrum Disorders. Poster presented at the ASHA Convention, San Diego.
NOTE: Available at http://www.asha.org/Events/convention/handouts/2011/de-Villiers-de-Villiers-Diaz-Cheung-Alig-Raditz-Paul/ .
[i.52] Norbury, C. F. (2005). The relationship between theory of mind and metaphor: Evidence from children with language impairment and autistic spectrum disorder.; Oxford Study of Children's Communication Impairments, University of Oxford, UK; British Journal of Developmental Psychology, 23, 383-39.
NOTE: Available at http://www.pc.rhul.ac.uk/sites/lilac/new_site/wp-content/uploads/2010/04/metaphor.pdf.
[i.53] Language and Understanding Minds: Connections in Autism; Helen Tager-Flusberg, Ph.D; Chapter for: S. Baron-Cohen, H. Tager-Flusberg, & D. J. Cohen (Eds.), Understanding other minds: Perspectives from autism and developmental cognitive neuroscience. Second Edition. Oxford: Oxford University Press.
NOTE: Available at http://www.ucd.ie/artspgs/langimp/TAG2.pdf.
Neilson-aging
Top Five Instructional Tips for Students with Down Syndrome: http://specialedpost.org/2013/01/31/top-five-instructional-strategies-for-students-with-down-syndrome/
http://www.autism.org.uk/working-with/autism-friendly-places/designing-websites-suitable-for-people-with-autism-spectrum-disorders.aspx (downloaded 08/2015)
Students with Down Syndrome, http://www.downssa.asn.au/__files/f/3203/A%20Student%20with%20Down%20Syndrome%202014.pdf
Task force links
Issue papers
COGA Techniques
Testability
This success criterion is testable if each of the bullet points is testable. If the content fails any bullet point, it is not conformant to this success criterion. If it passes all of the bullet points, it is conformant.
Bullet points:
Tense and voice are objective, and hence are verifiable. (It is expected that natural language processing algorithms will be able to conform to this automatically with reasonable accuracy.)
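To illustrate why such checks tend to yield Potential Violations rather than outright Violations (as noted earlier in the thread), here is a minimal and admittedly naive passive-voice flagger. The pattern and word lists are assumptions for illustration; real checkers need part-of-speech tagging, and every hit still needs human review against the exceptions.

```python
import re

# Heuristic: a "to be" auxiliary followed by a likely past participle.
PASSIVE_HINT = re.compile(
    r"\b(am|is|are|was|were|be|been|being)\s+(\w+ed|\w+en)\b",
    re.IGNORECASE,
)

def passive_candidates(sentence: str):
    """Return possible passive constructions, for human review."""
    return [m.group(0) for m in PASSIVE_HINT.finditer(sentence)]
```

Note the failure modes: "the door is open" is missed, while some adjectival uses ("he was tired") are wrongly flagged, which is exactly the accuracy concern raised above.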
Testing for exceptions:
If present tense and active voice have not been used, the tester will need to confirm whether one of the exceptions is relevant. If an exception is not relevant, and present tense and active voice have not been used, then the content fails this success criterion.
Even languages with a small number of users have published lists of most-frequent words (such as Hebrew). If there is a natural language that does not have such a list, algorithms exist that calculate these lists for languages, or for specific contexts. Testing content against these word lists can be done manually. However, it is expected there will be a natural language processing testing tool by the time this goes to CR. (It is already integrated into a tool by IBM.)
Testing for exceptions is as discussed above.
Use of double negatives is a fact, and hence is verifiable. It is assumed a natural language processing tool will also test for this. Testing for exceptions is as discussed above.
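A sketch of what such a double-negative check might look like, under stated assumptions: the negator list is illustrative, and this is English-only; negative-concord languages discussed earlier in the thread would need locale-specific rules rather than a simple count.

```python
import re

# Illustrative English negator set; not exhaustive.
NEGATORS = {"not", "no", "never", "none", "nothing",
            "nobody", "nowhere", "neither"}

def has_double_negative(sentence: str) -> bool:
    """Flag sentences containing two or more negators,
    including n't contractions. Hits need human review:
    some double negatives are intentional emphasis."""
    words = re.findall(r"[a-z']+", sentence.lower())
    count = sum(1 for w in words if w in NEGATORS or w.endswith("n't"))
    return count >= 2
```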
Non-literal text, such as metaphors, can be identified when the meaning of the sentence is something other than the meaning of the individual words. This is human testable. Cognitive computing algorithms can test for this as well.
If the text is not literal, then the tester must confirm that personalization and an easy user setting enables it to be replaced, such that all meaning is retained.
This can be tested by identifying the function of the control, and checking if it is identified in the label.
This is human testable by completing the instructions literally, and confirming that the effect is correct.
Techniques