Usability is simply a way of describing how easy a website is to understand and use. Usability can be measured objectively—for example, by timing users who are trying to perform a standardized task—and subjectively, by interviewing site users about their impressions. Usability is an even broader goal than accessibility, which refers to how easily a website can be used, understood, and accessed by people with disabilities.

Even on the most accessible websites, it can be difficult for users to find what they are looking for. How do you assess and improve the usability of an accessible website? And how do you adapt your usability tests to include users with visual impairments? Here are some practical strategies and helpful tips.

Why Do Usability Testing at All?

  • If done early and often, testing saves time and effort.
  • Usability tests reveal whether you are meeting your audience's needs, and show you ways to improve.
  • Test results can serve as a neutral means of convincing designers, your boss, and other departments that changes are necessary.

Bottom line: Only users can tell you whether your site is a success. And only users of assistive technology can tell you if your site is truly accessible.

Why You Should Include Users With Disabilities as Participants in Your Usability Tests—And How to Go About It

It is important to include participants who will be accessing your site with a screen reader or with screen magnification software, in addition to sighted users who navigate websites with a mouse. The number of people using assistive technology is growing. A Forrester survey commissioned by Microsoft found that 40% of computer users are likely to benefit from the use of accessible technology due to mild difficulties or impairments. That's about 51.6 million people.

Try to find people who use Window-Eyes as well as JAWS (two common screen readers), plus a variety of screen magnification programs. Different programs handle web elements in different ways, and you will want to resolve any resulting usability issues.

But that does not mean that you have to create a usability lab that supports all of the different types of hardware and software. It may be more effective to observe people using their own familiar equipment. This approach will show how your site works on the computer setups that people really use—often older and slower than the equipment in a modern technology lab.

If you are interested in improving the accessibility of your site and need to find users with visual impairments or other disabilities, consider contacting a local rehabilitation agency. It may be able to connect you with some of its clients, or with an assistive technology instructor who would be happy to promote the cause of web accessibility. Another resource to consider is a local university's office of disability services.

Different Types of Usability Testing

There are many different ways to approach usability testing, and you will probably want to use more than one:

  • testing language and categorization by various means, such as card-sorting exercises
  • quantitative tests that gather objective metrics, such as the time a task requires, the error rate, which keystrokes were used for navigation, etc.
  • "user's experience" interviews, measuring users' subjective satisfaction, why they would visit one site as opposed to others, and even how people with visual impairments conceptualize the Web
  • heuristic review—experts evaluate the site using a list of established usability standards
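
For the quantitative tests above, even a short script can turn raw session logs into comparable metrics. Here is a minimal sketch in Python; the record format (task name, seconds to complete, error count) is hypothetical, not a standard log format:

```python
# Compute simple usability metrics from logged test sessions.
# Each record is (task, seconds_to_complete, error_count);
# this format is illustrative only.

from statistics import mean

sessions = [
    ("find contact page", 42.0, 1),
    ("find contact page", 95.0, 3),
    ("download report", 30.0, 0),
    ("download report", 51.0, 2),
]

def summarize(sessions):
    """Return mean completion time and mean error count per task."""
    by_task = {}
    for task, seconds, errors in sessions:
        by_task.setdefault(task, []).append((seconds, errors))
    summary = {}
    for task, records in by_task.items():
        summary[task] = {
            "mean_seconds": mean(s for s, _ in records),
            "mean_errors": mean(e for _, e in records),
            "sessions": len(records),
        }
    return summary

for task, stats in summarize(sessions).items():
    print(f"{task}: {stats['mean_seconds']:.1f}s avg, "
          f"{stats['mean_errors']:.1f} errors avg over {stats['sessions']} runs")
```

Comparing these numbers across rounds of testing shows whether your design changes are actually making tasks faster and less error-prone.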

Tips for Remote Testing

Conducting usability tests in a normal environment like the users' own homes or workplaces can be very effective. If users are more at ease, they will be more candid. Plus, keeping the "respondent burden" low will make it easier for you to recruit participants! Here are a few things to keep in mind if you find yourself doing usability testing over the phone.

  • Encourage users to think aloud.
  • Remind them to express their likes and dislikes while they are using the site.
  • Stay neutral.
  • Take good notes.

In-person testing is the gold standard, but remote testing can help you widen your reach.

A Case Study

The American Foundation for the Blind's website recently underwent a major redesign in order to improve the site's usability and create a more logical, user-friendly information architecture. After identifying our audiences and their needs, we did four rounds of usability testing.

In Round 1, we worked out language and concepts. In Round 2, we evaluated HTML mockups (since paper mockups would not have been accessible to our blind or visually impaired users). In Round 3, we settled on a preferred prototype and subjected it to more rigorous tests. And in Round 4, we fine-tuned a live site.

Round 1: Language and Concepts

It is important to figure out your site's categories and concepts before you begin designing. An effective, usable design depends on the designers having a clear understanding of the site's purpose. Who do you hope will use the site? What language sounds natural to your users and will guide them to the information they need?

Here are some tips for conducting this preliminary round of usability testing:

  • Ask a wide variety of users for their opinions.
  • Present them with word choices, but be open to suggestions.
  • Ask users what information they would expect to find if they clicked on a particular word or phrase. Are you using keywords and categories that make sense to them?
  • Include users with disabilities at every stage.
  • Talk to people who are not closely connected to your organization.

This process can be done by phone or e-mail, as well as in person.

Round 2: Choosing Between Mockups

We used HTML mockups so that users who were blind or visually impaired could participate in the testing and explore the layout independently. At this stage, you will want to include users with a wide range of:

  1. technical skill
  2. access methods
  3. professional backgrounds
  4. informational needs

The important thing is to find out users' impressions of each mockup. Here are some sample questions from Round 2:

  • "What is the first thing you noticed?"
    (Is it what you'd hoped?)
  • "Who do you think this site is intended for?"
  • "What would you expect to be able to do on this site?"
  • "How does this site make you feel?"

Although at this point you are trying to solicit feedback about the layout and overall design, you should continue testing the language's effectiveness. Are any of the terms causing confusion? Do your users have any suggestions that would help clarify your meaning?

Round 3: Testing the Prototype

Be sure to write a script ahead of time. This will help you stay neutral in your phrasing, and make sure that you cover all of the important topics.

Use tasks suggested by the Round 2 users. What did they say they wanted to be able to do on your site? Are users now able to do those things easily? Try to include people who fall into the different audience groups you identified early on.

Look at how information is prioritized—do your choices make sense to the users? Also, don't neglect any accessibility issues that are brought to your attention. If users notice a problem that is hindering their access to the site, fix it. These minor improvements will allow the participants to stay focused on the site's usability.

Sample Feedback from Round 3

  • lots of opinions on color, both on aesthetics and on readability; contrast is a very important issue for both accessibility and usability
  • opinions on text size, and distinctions between headings
  • comments about which section of the page was most attention-grabbing
  • observations on grouping, categorization, home page placement
  • minor language suggestions

Round 4: Testing a Live Site

For this final round of testing, the site should be fully functional. Cast a wide net—consider snowball sampling and opportunity sampling to find new users. "Opportunity sampling" simply means that you position yourself somewhere you are likely to find users who fit your desired criteria, and take your sample from people who are available at the time. Good opportunities might include conferences or events for people with disabilities, rehabilitation agencies, or even senior citizen centers.

"Snowball sampling" involves finding one or two key people who fit the profiles you're looking for and then asking them to recommend people similar to them who would be willing to be interviewed. By moving steadily away from the people you know, you can reduce bias and expand your group of potential testers.

By Round 4, you will have fewer "what would you do?" questions, and more observation of what they actually do. Look for glitches and confusion—add escape hatches. For example, if users keep going into the "wrong" category looking for a particular document, either move the article or make it a related link.
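
The "wrong category" pattern described above can also be spotted in navigation logs once the site is live. Here is a rough sketch, assuming each record lists the pages a user visited before reaching a target document (the log format and page names are hypothetical):

```python
# Count which pages users pass through on the way to a given document.
# A frequently visited "wrong" category is a good candidate for a
# related link. The log format here is hypothetical.

from collections import Counter

visits = [
    ["home", "services", "publications", "annual-report"],
    ["home", "about-us", "publications", "annual-report"],
    ["home", "services", "publications", "annual-report"],
]

def detours(paths, target):
    """Count intermediate pages visited before reaching `target`."""
    counts = Counter()
    for path in paths:
        if target in path:
            # Drop the entry page, count everything up to the target.
            counts.update(path[1:path.index(target)])
    return counts

for page, n in detours(visits, "annual-report").most_common():
    print(f"{page}: visited {n} times before reaching the document")
```

If a category like "services" keeps showing up on the way to a document that lives elsewhere, that is your cue to add a related link there (or move the document).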


Key Takeaways

  • The user is the expert.
  • Good usability contributes to better accessibility; the two are intertwined.
  • Test early and often.
  • Your usability work is never done—keep testing!