The fundamentals of accessibility on the web are defined in the Web Content Accessibility Guidelines (WCAG). They establish a set of principles to make web content as accessible as possible to the widest range of users and, as such, focus particularly on the needs of those with disabilities.
The guidelines are organised into four main principles:
- Perceivable: text should be sufficiently distinct from its background and images should have a text alternative;
- Operable: the website should be fully usable with both mouse and keyboard and on a range of devices;
- Understandable: the information presented should be readily understandable and the means of operating the site should be apparent and intuitive;
- Robust: the site should operate on a broad range of devices and user agents and should “gracefully” manage new technologies such that functionality and accessibility are maintained as the environment advances.
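The Perceivable principle above is one of the few that can be checked numerically: WCAG defines a contrast ratio between foreground and background colours, derived from their relative luminance, with 4.5:1 as the threshold for normal-sized text at level AA. A minimal sketch of that calculation, using only the standard formula from the guidelines:

```python
# Compute the WCAG contrast ratio between two sRGB colours.
# The luminance formula and the 4.5:1 AA threshold come from WCAG 2.x.

def _linearise(channel: int) -> float:
    """Linearise one sRGB channel (0-255) per the WCAG definition."""
    s = channel / 255
    return s / 12.92 if s <= 0.03928 else ((s + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple) -> float:
    """Weighted sum of the linearised red, green, and blue channels."""
    r, g, b = (_linearise(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    """Ratio of the lighter to the darker luminance, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background gives the maximum possible contrast.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

A mid-grey such as rgb(118, 118, 118) on white lands at roughly 4.5:1, which is why it is often cited as the lightest grey that passes AA for body text.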
Under these principles, WCAG sets out 61 success criteria that either must, should or may be implemented, depending on the conformance level being targeted. Understanding and applying all these criteria requires considerable expertise and attention to detail. Websites can behave differently across browsers, devices, and assistive technologies, and users will interact with a website in a vast number of ways, which can be a challenge for testers to anticipate.
Furthermore, while automated tools can speed up the testing process, they can only identify a fraction of the potential accessibility issues. Automated tools work by scanning the website’s code to look for specific patterns that might indicate accessibility issues. They’re great at identifying technical issues, like missing alt text, improperly nested headings, or incorrect ARIA usage. However, the most significant aspects of digital accessibility can’t be adequately evaluated by just looking at code.
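The kind of pattern-matching described above can be sketched in a few lines. The following example, using only Python's standard library, flags `img` elements that lack an `alt` attribute entirely; note that it can say nothing about whether any alt text that *is* present is meaningful:

```python
# A minimal sketch of an automated accessibility check: scan HTML for
# <img> tags with no alt attribute at all. This is the kind of purely
# structural pattern automated tools excel at finding.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []  # src values of <img> tags lacking an alt attribute

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if "alt" not in attr_map:
                self.missing.append(attr_map.get("src", "<no src>"))

sample = """
<img src="logo.png" alt="Company logo">
<img src="chart.png">
"""
checker = MissingAltChecker()
checker.feed(sample)
print(checker.missing)  # ['chart.png']
```

Note that `alt="chart"` would sail through this check unflagged, even though it tells a screen-reader user almost nothing; that gap is exactly the limitation the next paragraphs describe.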
For instance, an automated test can determine if alt text exists for an image but can’t evaluate if it adequately describes the image in context. Similarly, it can’t assess if the content is logically organized and easy to understand or if interactive components are intuitive to use.
For circumstances such as these the expertise of the software tester comes to the fore. However, with the recent advent of practical, usable AI technologies is there scope for test tools that can make a more “human” assessment of the less tangible elements of an accessible interface?
Accessibility and Large Language Models
Large language models such as ChatGPT are able to understand the content and context of a website to a certain degree. This enables them to go a step further than simply establishing the presence of alt text for an image: where the text is present, they can make a judgement on whether it is complementary to the content it is illustrating.
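One way to frame that judgement is to hand the model both the alt text and its surrounding content and ask for a verdict. The sketch below only assembles such a prompt; the function name, the prompt wording, and the step of actually sending it to a model are illustrative assumptions, not the API of any particular tool:

```python
# Hypothetical sketch: build a prompt asking an LLM whether an image's
# alt text fits its context. Sending the prompt to a model (e.g. via a
# chat-completion API) is deliberately left out, as that step depends
# on the vendor and is not shown in the source article.

def build_alt_text_review_prompt(alt_text: str, surrounding_text: str) -> str:
    """Assemble a review prompt combining the alt text and nearby page content."""
    return (
        "You are reviewing a web page for accessibility.\n"
        f"Image alt text: {alt_text!r}\n"
        f"Surrounding content: {surrounding_text!r}\n"
        "Does the alt text adequately describe the image in this context? "
        "Answer 'adequate' or 'inadequate' with a one-sentence reason."
    )

prompt = build_alt_text_review_prompt(
    "chart",
    "Figure 2 shows quarterly revenue growth from 2020 to 2023.",
)
# A terse alt text like "chart" would likely be judged inadequate here,
# even though a purely structural checker would pass it.
print("inadequate" in prompt)  # True
```

The point of the sketch is the shape of the question: unlike a pattern match, it asks the model to weigh the alt text against the meaning of the surrounding content.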
Several companies have started to leverage AI for accessibility testing. For example, Microsoft’s Accessibility Insights draws upon AI technologies to help developers find and fix accessibility issues in their applications. Similarly, AccessiBe uses AI technology to scan and adjust websites for accessibility compliance automatically.
There’s no doubt that AI is enhancing the capabilities of automated accessibility testing technologies. However, AI is not yet a silver bullet for accessibility testing. While it can augment and improve testing processes, it can’t replace the need for manual testing and user testing. Real users’ experiences, especially those with diverse abilities, are essential in building a truly accessible web.