Every once in a while, somebody mentions to me how they’re concerned because their (or my) site didn’t “pass” some online accessibility evaluator or another. This always opens up the conversation for one big, complicated issue: why automated accessibility testing just doesn’t work.
This isn’t to say that automated testing doesn’t have a place, but it should never be considered the deciding factor for accessibility.
The Functional Accessibility Evaluator from the University of Illinois at Urbana-Champaign was pointed out to me recently. Naturally, I figured it was worth a look. In fact, it’s very interesting. (It has problems, but I’ll get to those.)
First of all, the results break your evaluation into five basic categories: Navigation and Orientation, Text Equivalents, Scripting, Styling, and HTML (HyperText Markup Language) Standards. This is a nice, user-friendly way of dividing the problems — exposing, in particular, a concern for the pure functionality of the navigation which often goes unmentioned.
Your performance in a given category is scored as a set of percentages, divided between Pass, Warn, and Fail.
Let’s take a quick look at one category in which my own website had notably mixed results: Navigation & Orientation.
I received a 75% pass, with a 12% warn and a 12% fail. It’s a little unclear what these percentages actually mean, but the details report gives you a lot more information. Specifically, I failed because the text content of my h1 element did not match “all or part” of the title element.

Really? That’s a surprising result! The h1 element of this site is “Joe Dolson: Accessible Web Design.” The title element of the home page is “Joe Dolson Accessible Web Design: Designing websites with accessibility and usability in mind.”
Now, the fact that I failed this points out an obvious weakness in the comparison: it was not able to detect that the only real difference in the common phrase was a single colon. This is a perfect example of a case where a human tester would quickly see that there is no real problem.
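I can only guess at how the evaluator compares the two strings, but a naive literal substring test — a plausible assumption for this kind of tool — illustrates exactly this failure, and shows how little normalization it would take to fix it:

```python
import string

h1 = "Joe Dolson: Accessible Web Design"
title = ("Joe Dolson Accessible Web Design: Designing websites "
         "with accessibility and usability in mind.")

# A literal substring test fails: the colon in the h1 has no
# counterpart at the same position in the title.
print(h1 in title)  # False

def normalize(s):
    """Strip punctuation and lowercase before comparing."""
    return s.translate(str.maketrans("", "", string.punctuation)).lower()

# With punctuation stripped, the h1 really is part of the title.
print(normalize(h1) in normalize(title))  # True
```

A human tester performs that normalization instinctively; the automated test, apparently, does not.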
Further, a discerning human tester might (as I do) take issue with the grounds for this particular test itself. The test reflects the system designer’s opinion that the h1 element and the title element should always reflect the specific, unique contents of the page. Whoa, there! While not everybody may agree with me that the h1 can very reasonably be used to identify the site you’re on rather than the page, I don’t think this is a judgement which should be used to determine a site’s accessibility.
Now, this isn’t actually the only ground on which my Navigation & Orientation score was slammed. I was also informed that, within my navigation bars, it is a best practice that “each ul element that appears to be a navigation bar should be immediately preceded by a header element.”
I’m not sure, honestly, whether this is a fair judgement. Should all navigation bars be preceded with a navigational label which is a heading? I can see that this would be of benefit to screen readers, since they would be able to make use of these headings to quickly navigate between each menu. It’s not something I generally do, although I’ll usually label the main navigation area of a site with a heading element. (I’m interested to hear comments on this.)
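For what it’s worth, the pattern the evaluator is asking for would look something like this (the class names and link targets are my own invention, purely for illustration):

```html
<h2 class="nav-label">Site navigation</h2>
<ul class="nav">
  <li><a href="/">Home</a></li>
  <li><a href="/portfolio/">Portfolio</a></li>
  <li><a href="/contact/">Contact</a></li>
</ul>
```

If the visible heading is unwanted, it could be hidden off-screen with CSS while remaining available to screen readers — which is the usual compromise when a design has no room for a label.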
On the whole, I find the Functional Accessibility Evaluator to be quite interesting, and it provides good value. It’s a great natural-language approach to identifying potential accessibility problems. Despite this, however, the automated nature of the testing exposes many of the standard difficulties with any kind of automated results. Namely, at any point where such tests deviate from precise guidelines, they expose the designer’s personal bias on the issues; and they are not capable of making fine distinctions between “pass” and “fail.”