Automated accessibility scanners can catch many issues: missing alt attributes, broken ARIA references, insufficient color contrast. But they cannot evaluate whether a screen reader user actually understands the flow of a checkout form, or whether a switch device user can reach every interactive element in a reasonable number of steps. User testing with assistive technologies fills that gap by putting real people in front of real interfaces and observing what happens.
This type of testing goes beyond compliance checking. WCAG conformance is necessary, but a page can technically pass every success criterion and still be confusing or unusable when navigated with a screen reader. Labels might be present yet misleading. Focus order might be logical on paper yet disorienting in practice. The linear reading order might differ from the visual order in ways that only surface when someone listens to the page sequentially. These problems are discovered only when actual assistive technology users interact with the product.
Why user testing with assistive technologies matters
Roughly 15% of the global population has some form of disability, according to the World Health Organization. Many of these people rely on assistive technologies daily. When a site is only tested with a mouse and a visual browser, entire categories of interaction go unverified.
Automated tools typically catch 30–40% of WCAG issues. The remaining issues require human judgment: Is this live region announcement helpful or noisy? Does the heading structure make sense when you cannot see the visual layout? Is the custom drag-and-drop widget operable with voice control? These questions have no automated answer.
Without this kind of testing, organizations risk shipping interfaces that are technically "accessible" according to a scanner yet practically inaccessible to the people the guidelines exist to protect.
How user testing with assistive technologies works
Choosing assistive technologies
Select tools based on the disabilities most relevant to the product and its audience. Common pairings include:
- NVDA or JAWS with Firefox or Chrome on Windows
- VoiceOver with Safari on macOS and iOS
- TalkBack with Chrome on Android
- Dragon NaturallySpeaking for voice control
- Switch Access on Android or Switch Control on iOS
Testing with at least one screen reader, one voice control tool, and keyboard-only navigation covers a broad range of interaction patterns.
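Keyboard reachability can even be sanity-checked before a session starts. As a rough pre-session aid, not a substitute for watching a real user, a snippet like the following can be pasted into the browser console to list the elements the Tab key can reach in DOM order. The selector list is a simplification that ignores visibility and disabled states:

// Rough pre-session check: list keyboard-reachable elements in DOM order.
// Gaps or surprises here are worth probing in the actual session.
var focusable = document.querySelectorAll(
  'a[href], button, input, select, textarea, [tabindex]:not([tabindex="-1"])'
);
focusable.forEach(function (el, i) {
  console.log(i + 1, el.tagName.toLowerCase(),
    el.getAttribute("aria-label") || el.textContent.trim());
});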
Recruiting participants
The most informative sessions involve people who use assistive technologies as part of their daily routine. These users have muscle memory, preferred settings, and real-world coping strategies that a sighted developer toggling on a screen reader for the first time does not. Recruit participants with varying levels of experience and with different assistive technology setups to get a wider view of potential issues.
Structuring sessions
Give participants realistic tasks rather than asking them to "look around." A task like "find the shipping cost for your cart" or "change your notification preferences" forces engagement with specific UI components. Record the session (with consent) so the team can review interactions later. Note where users hesitate, backtrack, or express confusion.
Interpreting results
Not every difficulty is a code defect. Sometimes users are unfamiliar with a particular widget pattern. But repeated confusion across multiple participants points to a real barrier. Combine findings with WCAG success criteria references so developers know both what to fix and which guideline applies.
Code examples
A common finding in user testing is that custom interactive components lack proper ARIA roles and states, making them invisible or confusing to screen reader users.
Bad example: custom toggle with no accessible state
<div class="toggle" onclick="toggleDarkMode()">
<span class="toggle-label">Dark mode</span>
<span class="toggle-switch"></span>
</div>
A screen reader announces this as generic text. The user hears "Dark mode" but has no indication that it is a control, what type of control it is, or whether it is currently on or off. And because a div is not focusable by default, keyboard and switch users cannot reach it at all.
Good example: accessible toggle with role, state, and keyboard support
<button
  type="button"
  role="switch"
  aria-checked="false"
  aria-label="Dark mode"
  onclick="toggleDarkMode(this)">
  <span class="toggle-switch"></span>
</button>
<script>
  function toggleDarkMode(el) {
    // Flip the state exposed to assistive technologies; screen readers
    // re-announce the switch as "on" or "off" when aria-checked changes.
    var checked = el.getAttribute("aria-checked") === "true";
    el.setAttribute("aria-checked", String(!checked));
  }
</script>
A screen reader now announces "Dark mode, switch, off" and updates to "on" when activated. Because this is a native <button>, it is keyboard focusable and Enter and Space fire its click handler without any bespoke key handling. The role="switch" and aria-checked attributes communicate the component type and its current state.
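A useful companion pattern, though not the only option, is to drive the visual state from the same attribute the screen reader reads, so the announced and displayed states cannot drift apart. A minimal CSS sketch, assuming the class name from the markup above; the colors are placeholders:

/* Off state: style the switch as needed. */
button[role="switch"] .toggle-switch {
  background: #ccc;
}
/* On state keys off aria-checked, the same source of truth
   that screen readers announce. */
button[role="switch"][aria-checked="true"] .toggle-switch {
  background: #2a7;
}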
Bad example: form error announced too late
<form>
  <label for="email">Email</label>
  <input type="email" id="email" name="email">
  <span class="error" id="email-error"></span>
  <button type="submit">Subscribe</button>
</form>
If validation injects an error message into #email-error after submission, a screen reader user who has already moved focus past that element will never hear the error. User testing reveals this because the participant submits the form and has no idea anything went wrong.
Good example: error linked to input and announced via live region
<form>
  <label for="email">Email</label>
  <input
    type="email"
    id="email"
    name="email"
    aria-describedby="email-error"
    aria-invalid="false">
  <span class="error" id="email-error" role="alert"></span>
  <button type="submit">Subscribe</button>
</form>
<script>
  function showError(input, message) {
    // Injecting text into the role="alert" container triggers an
    // immediate screen reader announcement.
    var errorEl = document.getElementById(input.id + "-error");
    errorEl.textContent = message;
    // Flag the field as needing correction and move focus to it.
    input.setAttribute("aria-invalid", "true");
    input.focus();
  }
</script>
The role="alert" on the error container triggers an immediate screen reader announcement when its content changes. The aria-describedby association means the error is also read when the input regains focus. Setting aria-invalid="true" tells the user the field needs correction. User testing confirms that participants hear the error and know which field to fix.
Combining user testing with other methods
User testing with assistive technologies works best alongside automated scanning and manual audits. Automated tools flag the low-hanging fruit. Manual audits by trained testers catch structural and semantic issues. User testing then validates that the experience actually works for the people it is meant to serve. Running all three produces a far more accurate picture of a site's accessibility than any single method alone.
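As a concrete illustration of the automated layer, a scan can run against the same pages the user testing covers. A short sketch using the open-source axe-core library, assuming its script is already loaded on the page:

// Log each automated finding with its WCAG tags; anything requiring
// human judgment is left to manual audits and user testing.
axe.run(document).then(function (results) {
  results.violations.forEach(function (violation) {
    console.log(violation.id, violation.impact, violation.tags.join(", "));
  });
});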