First impressions of automated checkers for WCAG 2.0 AA

A week ago I finished my assignment on automated checkers for WCAG 2.0 AA. I asked one to check the home page of another blog of mine and it spewed out 223 “potential problems.” (A classmate told me she got more than a thousand.) I trudged through the list, and at the end what did I find? Three legitimate concerns. Yes, three out of 223 were all I could find. That’s an accuracy of just slightly over 1%. Granted, from an AI point of view I know that we don’t talk about accuracy but rather about recall and precision, and a lot of the bogus warnings do concern deep AI problems, such as understanding human language, or outright unsolvable ones, like guessing the author’s intent. That said, some of those warnings—especially those related to standard third-party JavaScript library API calls, standard icons, or, incredibly, non-breaking spaces—are just absurd. I don’t hate the checker or have anything against it per se, but for these checkers to be taken seriously they really have to get better. A precision of 1% is not going to work.

WCAG first impressions

Since some of us are talking about aChecker, I threw my own site at it and it spewed out a slew of complaints. I didn’t assume my site was flawless (in fact I knew it had many problems), but the number of complaints it threw at me was just too much.

I mean, some of what it spewed back at me was justified. (For example, I didn’t know WCAG requires the lang attribute to be put on the HTML element instead of the BODY element—not that the requirement made any sense to me.) But some of it was just bogus. Contrast problems for non-textual elements that happen to be text? With the advent of webfonts, textual data can be anything (especially now that people have started talking about using specialized dingbat fonts as a replacement for graphics). You just can’t infer that a piece of textual data will be rendered as actual text, especially when the glyphs concerned are obviously symbols.
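
For what it’s worth, the lang fix the checker wanted is a one-line change. Here’s a minimal sketch of the markup it expects (the language code is just an example; WCAG 2.0’s technique H57 puts the attribute on the root element):

    <!DOCTYPE html>
    <!-- lang goes on the HTML element, where WCAG wants it -->
    <html lang="en">
      <head><title>Example page</title></head>
      <!-- putting lang only on BODY is what the checker flags -->
      <body>Hello</body>
    </html>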

The problem is that these requirements are divorced from context: from what the text actually is, and from how it is actually used.
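
To make that concrete: the same star character can be real content or pure decoration drawn from a dingbat-style webfont, and only the author knows which. A sketch (the class name is invented; aria-hidden is the standard hint that tells assistive technology to skip a decorative glyph):

    <!-- The star is content: text contrast rules reasonably apply -->
    <p>Rated ★★★★☆ by our readers.</p>

    <!-- The star is decoration, rendered as an icon by a dingbat webfont;
         aria-hidden tells assistive technology to ignore it, but a checker
         that only sees the character data still flags its contrast -->
    <span class="icon-star" aria-hidden="true">★</span> Favourites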

So I wonder: Will the mandatory adoption of WCAG actually produce the opposite of its intended effect? Will people, out of the requirement (as opposed to the desire) to be WCAG-compliant, forgo simple text for graphics, throwing the web back to where it was 10 years ago, when heavy graphics ruled? I don’t want to believe this, but if WCAG 2.0 AA is going to become mandatory, I think this is a very real possibility.