Every once in a while, somebody mentions to me how they’re concerned because their (or my) site didn’t “pass” some online accessibility evaluator or another. This always opens up the conversation for one big, complicated issue: why automated accessibility testing just doesn’t work.
This isn’t to say that automated testing doesn’t have a place, but it should never be considered the deciding factor for accessibility.
The Functional Accessibility Evaluator from the University of Illinois at Urbana-Champaign was pointed out to me recently. Naturally, I figured it was worth a look. In fact, it’s very interesting. (It has problems, but I’ll get to those.)
First of all, the results break your evaluation into five basic categories: Navigation and Orientation, Text Equivalents, Scripting, Styling, and HTML (HyperText Markup Language) Standards. This is a nice, user-friendly way of dividing the problems — exposing, in particular, a concern for the pure functionality of the navigation which often goes unmentioned.
Your performance in a given category is judged as a percentage, divided between Pass, Warn, and Fail.
Let’s take a quick look at one category in which my own website had notably mixed results: Navigation & Orientation.
I received a 75% pass, with a 12% warn and a 12% fail. It’s a little unclear what these percentages actually mean, but the details report gives you a lot more information. Specifically, I failed because the text content of my h1 element did not match “all or part” of the title element.
Really? That’s a surprising result! The h1 element of this site is “Joe Dolson: Accessible Web Design.” The title element of the home page is “Joe Dolson Accessible Web Design: Designing websites with accessibility and usability in mind.”
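For the record, here’s roughly what the evaluator was comparing on my home page (markup reconstructed from the quoted text above):

<title>Joe Dolson Accessible Web Design: Designing websites with accessibility and usability in mind.</title>

<h1>Joe Dolson: Accessible Web Design</h1>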
Now, the fact that I failed this obviously points out a weakness in the comparison — it was not able to detect that the only real difference in the common phrase was a single colon. This is a perfect example where a human tester should quickly identify that there is no real problem.
Further, a discerning human tester might (as I do) take issue with the grounds for this particular test itself. The test reflects the system designer’s opinion that the h1 element and the title element should always reflect the specific, unique contents of the page. Whoa, there! Not everybody may agree with me that the h1 can very reasonably be used to identify the site you’re on rather than the page, but I don’t think this is a judgement which should be used to determine a site’s accessibility.
Now, this isn’t actually the only ground on which my Navigation & Orientation was slammed. I was also informed that, within my navigation bars, it is a best practice that “each ol or ul element that appears to be a navigation bar should be immediately preceded by a header element.”
I’m not sure, honestly, whether this is a fair judgement. Should all navigation bars be preceded with a navigational label which is a heading? I can see that this would be of benefit to screen readers, since they would be able to make use of these headings to quickly navigate between each menu. It’s not something I generally do, although I’ll usually label the main navigation area of a site with a heading element. (I’m interested to hear comments on this.)
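For reference, the pattern the test is asking for looks something like this (a minimal sketch; the heading text, level, and links are my own invention):

<!-- a heading labels the menu so screen reader users can jump straight to it -->
<h2>Main navigation</h2>
<ul>
  <li><a href="/">Home</a></li>
  <li><a href="/portfolio/">Portfolio</a></li>
  <li><a href="/contact/">Contact</a></li>
</ul>

In practice, a heading like that is often positioned off-screen with CSS, so it’s available to screen readers without changing the visual design.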
Conclusion
On the whole, I find the Functional Accessibility Evaluator to be quite interesting and to provide good value. It’s a great natural-language approach to identifying potential accessibility problems. However, the automated nature of the testing exposes many of the standard difficulties with any kind of automated results: at any point where the tests deviate from precise guidelines, they expose the designer’s personal bias on the issues, and they are not capable of making fine distinctions between “pass” and “fail.”
Joe Dolson
Ultimately, that’s a significant part of any automated test: to provide a few more things to think about. It may not actually tell you anything specifically useful, but if it makes you think about a few items you hadn’t thought through thoroughly, it’s served a valuable purpose.
(Woo! “thought through thoroughly” — how do you like that construction?)
Ian Macfarlane
That tool’s really useful – I’ve run it over our blog post about DDA Compliance, and (thankfully, irony attack prevented) it pretty much completely passed, but it’s given us a few more things to think about!
Ian
Dan
Great post Joe!
I tested my own site with this and found that it was given warnings or failures for the same reasons yours was. What is wrong with your main navigation being preceded by your h1? I know it’s nice to give each menu a title, but I think it’s pretty clear that a menu consisting of Home, About Me, Galleries etc. is the main navigation, and as such doesn’t really require a title. Pretty much all of my other navigation is preceded by a heading.
I think the tool is a good one, but it MUST be accompanied by a human test to weed out the crap results.
Dan
Joe Dolson
That was pretty much my conclusion. Nice idea, nice way of approaching the tester, helpful way of explaining problems and providing comments…lousy decision making.
Thanks, Patrick!
patrick h. lauke
i briefly mentioned FAE at the joint power session i ran with ian lloyd at this year’s SXSW (see Accessified! Practical accessibility fixes any web developer can use, about 3/4 down). in short, i like the approach, but some of the heuristics it applies are dubious at best…heavily based on the creators’ opinion, and in many situations very dumb (from a machine/coding point of view). for instance, when i tested it, it wouldn’t recognise the perfectly legitimate practice of having an image with proper alt wrapped in a heading element, and would fail me for having an empty heading. ah well, i like the approach, but the ruleset definitely needs refining.
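to illustrate, the pattern that tripped it up looks something like this (filenames and text invented for the example):

<!-- the image carries real alt text, so the heading is not actually empty -->
<h2>
  <img src="section-banner.png" alt="About this site">
</h2>

a tool that only looks at the heading's text nodes sees nothing there and flags an empty heading, even though assistive technology will happily read the alt text as the heading.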
Joe Dolson
No argument there!
Stevie D
I think that’s the point I was trying to make as well! Which is why it is unhelpful for the accessibility test to recommend using accesskeys and to recommend which ones to use.
Joe Dolson
True. But the law does. As much as it’s a valid argument that the internet can be and is accessed globally, guidance on accessibility is not provided on a global basis; each country offers its own rules and suggestions, bound in law or not.
I think it’s unreasonable to suggest to somebody that they should need to know every country’s accessibility guidelines — and worse, need to choose from among them.
The fact is, I think that access keys shouldn’t be defined by any site. Even disregarding any specific country’s recommendations, the sheer existence of access key assignments can cause more problems than they realistically solve.
Letting users define their own preferred access keys is the only system that ensures the keys they prefer are available, and that those keys interfere as little as possible with the user’s other software.
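For context, this is all it takes for a site to claim an access key (a minimal sketch; the key choice here is my own, and it’s exactly the kind of assignment that can collide with shortcuts in a browser or screen reader):

<!-- depending on the browser, Alt+1 or another modifier combination activates this link -->
<a href="/" accesskey="1">Home</a>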
I can’t support the idea of choosing any one country’s recommendations on any point; everything should be judged according to the developer’s well-considered best practices and against the audience for that web site.
The guidelines and laws are a valuable resource, but if I disagree that a particular recommendation appropriately addresses an accessibility problem, I’ll make some OTHER decision.
It doesn’t ultimately matter that we shouldn’t have differences between guidelines. We do. They exist, and we have to deal with them as best we can.
Stevie D
Sorry, but I don’t buy that. If we were looking at the difference between Section 508 and the DDA as regards physical buildings, sure, you can have differences between the guidelines and requirements in different countries. But the internet doesn’t work like that.
If a computer user who uses accesskeys, and knows the system that works on the sites they use regularly (which will probably be based in their own country), then goes to a site based elsewhere, should they have to learn a new “language” of accesskeys? And how should I, as a web designer, know which of the comments the automated tester spits out are international and which are specific to one country (and which country)?
That doesn’t just apply to, e.g., someone going to a tourist information site in a foreign country. I read web design blogs from the UK, the USA and Scandinavia. It doesn’t matter to me where they are based; as far as I’m concerned, they’re on the internet, and they don’t have a country-specific TLD. If access tools are going to work differently on each of them, I can’t sanction promoting them. Until we have internationally agreed standards for accesskeys, ones which don’t clash with browser commands in any existing browsers or assistive technology, they are no good at all, and any automated tester that encourages their use is being distinctly unhelpful.
Sorry, rant over!
Joe Dolson
That’s actually a bit of an over-simplification, Jermayn — accessibility testing should always include human guidance and decision making. However, it’s a very good idea to use mechanical accessibility tools as well.
Both types of testing have their own problems: people miss things, and computers can’t discern the finer details (discriminating between appropriate and inappropriate alt text, for example).
The advantage of automated accessibility testing is that the computer won’t miss anything it’s capable of detecting: if you have an error the computer can detect, the computer will find it. It just requires a certain degree of human knowledge and decision making to separate the actual accessibility problems the computer picks up from the false positives.
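To make that distinction concrete (hypothetical markup): a machine can reliably flag the first image below, but only a human can tell that the second, which passes the mechanical check, is still useless to a screen reader user.

<img src="sales-chart.png">
<!-- detectable by machine: the alt attribute is missing entirely -->

<img src="sales-chart.png" alt="sales-chart.png">
<!-- passes the mechanical check, but the alt text describes nothing -->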
Jermayn Parker
I am in no way in the same league as you or even the commentators, but from what I know and understand, accessibility testing should really always be done by a human.
Mike Cherim
I can’t say I really care for the UIUC tester. My favorite is WebXact (the most accurate results that I know of, and slightly better than Cynthia), but the one I use most is Cynthia. The reason is that I can link to it by way of my web dev toolbar for Firefox (GrayBit.com can be loaded onto that toolbar, too, I happily report). I can also add the regular Cynthia (and GrayBit) link right to my page, just the way the markup validator link can be: one click to check a specific page. It’s easy to do with a little PHP (Hypertext Preprocessor). Can’t do that with WebXact.com.
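For anyone curious, the markup-validator version of that one-click link is plain HTML; a per-page Cynthia link works the same way once the page’s own URL is filled in, which is where the little bit of PHP comes in. The Cynthia URL below is illustrative only, not the service’s actual query format:

<!-- real pattern: the W3C markup validator checks the page that linked to it -->
<a href="http://validator.w3.org/check?uri=referer">Validate this page</a>

<!-- hypothetical Cynthia-style link with the current page's URL as a parameter -->
<a href="http://www.cynthiasays.com/check?url=http://example.com/this-page">Check accessibility</a>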
Joe Dolson
Yeah, I had a lot of issues with the judgements passed, as well. Didn’t have the time to go into all of them – but thanks for mentioning these!
While I agree with you in principle, I should also point out that UK guidance simply doesn’t apply in this context, since it is a United States-based system. This is, of course, one of the big problems with trying to apply systems globally!
What I like about this test is the clear manner in which it divides the objectives and the manner in which it describes the results.
The results themselves are another matter entirely!
Stevie D
Even leaving aside the utility of automated accessibility testing, there are a number of checks in that test that I have issues with.
1. “Each img element with an alt attribute should have alt text” – no it shouldn’t! IMX the majority of images should have a null alt attribute, and putting alt text on an image that doesn’t need it actively harms accessibility and user-friendliness (see the sketch at the end of this comment).
2. “Each ol or ul element that appears to be a navigation bar or menu should be immediately preceded by a header element (h2..h6)” – if there are several navigation lists, this may be appropriate – but if there is only one, it doesn’t need a heading element.
3. The accesskeys recommended do not match up with UK government guidance. If official guidance on accesskeys is not globally accepted, no recommendation should be made to use a particular set of keys.
4. No mention is made of [link] navigation elements at all.
5. It gives a “fail” for the use of [b] and [i] elements, despite the fact that these are perfectly legitimate elements to use where appropriate. Ideally I would like to see it give a warning for these – yes, most of the time they should be [strong] and [em], so it should flag it up as a potential issue, but to fail a site for using a legitimate element correctly is just plain wrong.
6. I agree that “The text content of each h1 element should match all or part of the title content” is a load of hogwash. Sometimes [h1] is used for the site title, which may not be appropriate to include in the [title].
On the plus side, at least it passes its own tests, which many of its predecessors haven’t managed to do!
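To illustrate points 1 and 5, markup along these lines is perfectly legitimate and shouldn’t produce a fail (filenames and sentences invented for the example):

<!-- purely decorative image: a null alt keeps screen readers from announcing it -->
<img src="fancy-divider.gif" alt="">

<!-- i is correct where the convention is typographic, not emphatic -->
<p>The ship <i>Altair</i> left port at dawn.</p>

<!-- em is correct where the emphasis carries meaning -->
<p>Do <em>not</em> rely on automated results alone.</p>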